Fix the model config issue, OpenRouter Oauth issue and model refresh … #45
Conversation
Walkthrough

Propagates per-call mini/nano model variants across the agent runner, graph, nodes, and tools; expands tool execute signatures to accept an LLMContext; adds an OpenRouter OAuth sign-up→sign-in redirect and OpenRouter model auto-fetch with a 60-minute TTL cache; adds unit tests for the OpenRouter cache.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant User
participant OAuth as OpenRouterOAuth
participant Panel as AIChatPanel
participant LLM as LLMClient
participant Settings as SettingsDialog
Note over OAuth,Panel: OAuth success → auto-fetch OpenRouter models
User->>OAuth: complete OAuth flow
OAuth-->>Panel: oauth-success (apiKey)
Panel->>Panel: onOAuthSuccess (async)
Panel->>LLM: fetchOpenRouterModels(apiKey)
LLM-->>Panel: models[]
Panel->>Settings: SettingsDialog.updateOpenRouterModels(models)
Settings->>Settings: cache models + timestamp (TTL 60m)
Settings-->>Panel: dispatch openrouter-models-updated
Panel->>User: UI model selectors updated
sequenceDiagram
autonumber
participant UI as AIChatPanel
participant Service as AgentService
participant Graph as createAgentGraph
participant Node as ToolExecutorNode
participant Tool as Tool.execute
Note over UI,Tool: mini/nano propagation for tool calls
UI->>Service: initialize(apiKey, mainModel, miniModel, nanoModel)
Service->>Graph: createAgentGraph(apiKey, main, provider, mini, nano)
Graph->>Node: createToolExecutorNode(state, provider, main, mini, nano)
Node->>Tool: execute(input, { provider, miniModel, nanoModel, ... })
Tool-->>Node: result
Node-->>Service: result
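The propagation shown above hinges on one context object threaded from the service down to each tool call. A minimal sketch of what such a context and a tier-selection helper could look like — the names mirror the review's discussion (`miniModel`, `nanoModel`) but are illustrative, not the project's actual types, and the `ctx.model` fallback shown here is exactly what the review argues nano-only tools should drop:

```typescript
// Hypothetical per-call LLM context passed into Tool.execute; field names
// follow the review's terminology, not the real project types.
interface LLMContext {
  provider: string;     // e.g. 'openai' | 'openrouter'
  model: string;        // main model for this agent run
  miniModel?: string;   // cheaper variant for lightweight sub-tasks
  nanoModel?: string;   // smallest variant for trivial sub-tasks
}

interface Tool<TArgs, TResult> {
  name: string;
  execute(args: TArgs, ctx: LLMContext): Promise<TResult> | TResult;
}

// Pick the cheapest configured variant for a given tier, falling back upward.
function pickModel(ctx: LLMContext, tier: 'main' | 'mini' | 'nano'): string {
  if (tier === 'nano') {
    return ctx.nanoModel ?? ctx.miniModel ?? ctx.model;
  }
  if (tier === 'mini') {
    return ctx.miniModel ?? ctx.model;
  }
  return ctx.model;
}
```

With this shape, `createAgentGraph` → `createToolExecutorNode` → `Tool.execute` only ever forwards one object instead of growing positional parameter lists.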
sequenceDiagram
autonumber
participant Browser
participant OAuth as OpenRouterOAuth
Note over Browser,OAuth: Sign-up URL interception
Browser->>OAuth: URL change to https://openrouter.ai/sign-up?...
OAuth->>OAuth: parse URL, check session flag
alt first time
OAuth->>Browser: navigate to https://openrouter.ai/sign-in?... (preserve query)
OAuth->>OAuth: set sessionStorage flag
else already redirected
OAuth-->>Browser: no-op
end
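The interception in this diagram reduces to a pure decision function, which makes the sessionStorage-guarded redirect easy to unit test. The following is a sketch under assumed names (the `alreadyRedirected` flag stands in for the sessionStorage check), not the extension's actual code:

```typescript
// Decide whether a navigation to `currentUrl` should be redirected from the
// OpenRouter sign-up page to sign-in. Returns the redirect target, or null
// for a no-op (wrong host, not the sign-up path, or already redirected).
function signupToSigninTarget(currentUrl: string, alreadyRedirected: boolean): string | null {
  let url: URL;
  try {
    url = new URL(currentUrl);
  } catch {
    return null; // unparsable URL: ignore
  }
  const hostMatches = /(^|\.)openrouter\.ai$/i.test(url.hostname);
  const isSignup = url.pathname.startsWith('/sign-up');
  if (!hostMatches || !isSignup || alreadyRedirected) {
    return null;
  }
  // Preserve the query string (and the hash, as the review later suggests).
  return `https://openrouter.ai/sign-in${url.search}${url.hash}`;
}
```

The caller would set the session flag and navigate only when a non-null target is returned.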
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks: 3 passed
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
front_end/panels/ai_chat/core/AgentNodes.ts (1)
5-5: Fix type-only import misuse of getTools; use the Tool interface instead.

Using `import type { getTools }` and then `ReturnType<typeof getTools>` requires a value import and will fail type-checking. Type the map with `Tool` directly to avoid a runtime import and circular dependencies.

```diff
- import type { getTools } from '../tools/Tools.js';
+ import type { Tool } from '../tools/Tools.js';
@@
- const toolMap = new Map<string, ReturnType<typeof getTools>[number]>();
- tools.forEach((tool: ReturnType<typeof getTools>[number]) => toolMap.set(tool.name, tool));
+ const toolMap = new Map<string, Tool<any, unknown>>();
+ tools.forEach((tool: Tool<any, unknown>) => toolMap.set(tool.name, tool));
@@
- private toolMap: Map<string, ReturnType<typeof getTools>[number]>;
+ private toolMap: Map<string, Tool<any, unknown>>;
@@
- }(toolMap, provider, modelName, miniModel, nanoModel);
+ }(toolMap, provider, modelName, miniModel, nanoModel);
```

Also applies to: 411-416, 422-423
front_end/panels/ai_chat/tools/Tools.ts (3)
660-666: Template string bug: the log never interpolates the URL.

The literal '${url}' is logged. Use backticks.

```diff
- logger.info('Initiating navigation to: ${url}');
+ logger.info(`Initiating navigation to: ${url}`);
```
3145-3166: Escape the XPath when embedding it in a JS string to avoid syntax errors/injection.

Unescaped quotes in the XPath will break evaluation or allow unintended code injection.

```diff
- const evaluateResult = await runtimeAgent.invoke_evaluate({
-   expression: `
-     (function() {
-       const element = document.evaluate("${xpath}", document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
+ const evaluateResult = await runtimeAgent.invoke_evaluate({
+   expression: `
+     (function() {
+       const xpath = ${JSON.stringify(xpath)};
+       const element = document.evaluate(xpath, document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
        if (!element) return { found: false };
        ...
      })()
    `,
    returnByValue: true
  });
```
4189-4190: Fix the WaitTool arg type in the getTools union.

The union advertises a required seconds property, but WaitTool accepts optional seconds/duration. This mismatch can break type safety at call sites.

```diff
- Tool<{ seconds: number, reason?: string }, WaitResult | ErrorResult> |
+ Tool<{ seconds?: number, duration?: number, reason?: string, reasoning?: string }, WaitResult | ErrorResult> |
```

front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
539-546: Make the refinement/metadata steps use nanoModel consistently (drop the ctx.model fallback).

To align with the PR's nano-only policy for tools, remove the `|| options.ctx.model` fallbacks.

```diff
- if (!options.ctx?.provider || !(options.ctx.nanoModel || options.ctx.model)) {
-   throw new Error('Missing LLM context (provider/model) for refinement');
+ if (!options.ctx?.provider || !options.ctx.nanoModel) {
+   throw new Error('Missing LLM context (provider/nanoModel) for refinement');
  }
  const provider = options.ctx.provider;
- const model = options.ctx.nanoModel || options.ctx.model;
+ const model = options.ctx.nanoModel;
@@
- if (!options.ctx?.provider || !(options.ctx.nanoModel || options.ctx.model)) {
-   throw new Error('Missing LLM context (provider/model) for metadata');
+ if (!options.ctx?.provider || !options.ctx.nanoModel) {
+   throw new Error('Missing LLM context (provider/nanoModel) for metadata');
  }
  const provider = options.ctx.provider;
- const model = options.ctx.nanoModel || options.ctx.model;
+ const model = options.ctx.nanoModel;
```

Also applies to: 651-656
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2)
234-275: JSON from the LLM is never parsed; retries don't actually validate JSON.

`extractWithJsonRetry` returns raw text; downstream URL resolution treats strings as objects and silently skips resolution. Parse the JSON and retry on parse errors.

```diff
  const llmResponse = await callLLMWithTracing(
    {
      provider,
      model,
      messages: [
        { role: 'system', content: systemPrompt },
        { role: 'user', content: extractionPrompt }
      ],
      systemPrompt: systemPrompt,
      temperature: 0.1,
      options: { retryConfig: { maxRetries: 3, baseDelayMs: 1500 } }
    },
    {
      toolName: this.name,
      operationName: 'streamlined_extraction',
      context: 'json_extraction_with_retry',
      additionalMetadata: { attempt, maxRetries, instructionLength: instruction.length, treeTextLength: treeText.length }
    }
  );
- const result = llmResponse.text;
-
- logger.debug(`JSON extraction successful on attempt ${attempt}`);
- return result;
+ const text = llmResponse.text || '';
+ let parsed: any;
+ try {
+   parsed = JSON.parse(text);
+ } catch (e) {
+   if (attempt <= maxRetries) {
+     throw new Error(`Invalid JSON on attempt ${attempt}: ${(e as Error).message}`);
+   }
+   // last attempt: rethrow below
+   throw e;
+ }
+ logger.debug(`JSON extraction successful on attempt ${attempt}`);
+ return parsed;
```
170-179: The retry path also returns raw text; parse before returning.

Without parsing, URL resolution won't work.

```diff
  const retryResult = await this.retryUrlResolution(
    context.schema, context.treeText, context.instruction,
    extractionResult, unresolvedNodeIds, context.apiKey, attempt, ctx
  );
- if (retryResult) {
-   finalData = this.resolveUrlsDirectly(retryResult, context.urlMappings, urlFields);
-   extractionResult = retryResult; // Update for next iteration
+ if (retryResult) {
+   const parsed = typeof retryResult === 'string' ? JSON.parse(retryResult) : retryResult;
+   finalData = this.resolveUrlsDirectly(parsed, context.urlMappings, urlFields);
+   extractionResult = parsed; // Update for next iteration
  } else {
```

Also applies to: 181-188
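Both extraction comments above come down to one rule: parse the model's output immediately and treat parse failure as a retryable error. A standalone sketch of such a helper (the name and fence/bracket tolerance are assumptions for illustration, not the tool's actual API):

```typescript
// Parse an LLM response as JSON, tolerating markdown code fences and
// surrounding prose. Throws when nothing parseable is found, so callers can
// treat the failure as retryable and re-issue the LLM call.
function parseLlmJson(text: string): unknown {
  const stripped = text.replace(/```(?:json)?/g, '').trim();
  try {
    return JSON.parse(stripped);
  } catch {
    // Fall back to the outermost object or array literal in the text.
    for (const [open, close] of [['{', '}'], ['[', ']']] as const) {
      const start = stripped.indexOf(open);
      const end = stripped.lastIndexOf(close);
      if (start !== -1 && end > start) {
        return JSON.parse(stripped.slice(start, end + 1));
      }
    }
    throw new Error('LLM response contained no parseable JSON');
  }
}
```

With a helper like this, both the first-pass extraction and the URL-resolution retry hand typed objects downstream instead of raw strings.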
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)
492-506: Remove `any` and use the typed call context for overrides.

ESLint flags the `any`. Use the typed `callCtx` directly.

Apply this diff:

```diff
- // Run the agent
- const ctx: any = callCtx || {};
+ // Run the agent
  const result = await AgentRunner.run(
    internalMessages,
    args,
    runnerConfig,
    runnerHooks,
    this, // executingAgent
    undefined,
    {
-     sessionId: ctx.overrideSessionId,
-     parentSessionId: ctx.overrideParentSessionId,
-     traceId: ctx.overrideTraceId,
+     sessionId: callCtx.overrideSessionId,
+     parentSessionId: callCtx.overrideParentSessionId,
+     traceId: callCtx.overrideTraceId,
    }
  );
```

front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
976-988: Replace `as any` with a typed ToolExecutionContext.

This is the ESLint hit at line 978. Define a small context type and use it instead of `any`.

Apply this diff here:

```diff
- toolResultData = await toolToExecute.execute(toolArgs as any, ({
-   provider: config.provider,
-   model: modelName,
-   miniModel: config.miniModel,
-   nanoModel: config.nanoModel,
-   getVisionCapability: config.getVisionCapability,
-   overrideSessionId: preallocatedChildId,
-   overrideParentSessionId: currentSession.sessionId,
-   overrideTraceId: execTracingContext?.traceId,
- } as any));
+ type ToolExecutionContext = {
+   provider: LLMProvider,
+   model: string,
+   miniModel?: string,
+   nanoModel?: string,
+   getVisionCapability?: (modelName: string) => Promise<boolean> | boolean,
+   overrideSessionId?: string,
+   overrideParentSessionId?: string,
+   overrideTraceId?: string,
+ };
+ const toolCtx: ToolExecutionContext = {
+   provider: config.provider,
+   model: modelName,
+   miniModel: config.miniModel,
+   nanoModel: config.nanoModel,
+   getVisionCapability: config.getVisionCapability,
+   overrideSessionId: preallocatedChildId,
+   overrideParentSessionId: currentSession.sessionId,
+   overrideTraceId: execTracingContext?.traceId,
+ };
+ toolResultData = await toolToExecute.execute(toolArgs as unknown, toolCtx);
```

No top-level imports needed; `LLMProvider` is already imported as a type in this file.

front_end/panels/ai_chat/ui/SettingsDialog.ts (3)
552-563: Write OpenRouter models to the cache when refreshing on provider change.

You only set the timestamp, leaving `openrouter_models_cache` empty. The cache will look "fresh" but contain no models. Persist the model list alongside the timestamp for consistency with the auto-refresh path.

```diff
  if (openrouterApiKey) {
    try {
      logger.debug('Fetching OpenRouter models after provider change...');
      const openrouterModels = await LLMClient.fetchOpenRouterModels(openrouterApiKey);
      const modelOptions: ModelOption[] = openrouterModels.map(model => ({
        value: model.id,
        label: model.name || model.id,
        type: 'openrouter' as const
      }));
      updateModelOptions(modelOptions, false);
-     localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString());
+     localStorage.setItem('openrouter_models_cache', JSON.stringify(modelOptions));
+     localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString());
      logger.debug('Successfully refreshed OpenRouter models after provider change');
    } catch (error) {
      logger.error('Failed to fetch OpenRouter models after provider change:', error);
    }
  }
```
1811-1834: Fetch button path: also populate the OpenRouter cache.

Same issue here: timestamp only. Store the models too so the TTL-based checks work as intended.

```diff
  // Update model options with fetched OpenRouter models
  updateModelOptions(modelOptions, false);
- // Update timestamp for cache management
- localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString());
+ // Update cache for TTL-based refresh
+ localStorage.setItem('openrouter_models_cache', JSON.stringify(modelOptions));
+ localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString());
  const actualModelCount = openrouterModels.length;
  // Update the model selectors with the new models
  await updateOpenRouterModelSelectors();
```
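Both cache writes feed the same 60-minute TTL decision, so it helps to keep the freshness check itself in one pure helper that takes the raw stored string; this is what the PR's unit tests effectively exercise. A sketch under assumed names (the dialog's real code reads localStorage and a hardcoded key directly):

```typescript
const CACHE_DURATION_MS = 60 * 60 * 1000; // the PR's 60-minute TTL

// Decide whether the cached OpenRouter model list is still fresh. The raw
// timestamp comes out of storage as a string and may be missing or corrupt.
function isCacheFresh(rawTimestamp: string | null, now: number, ttlMs: number = CACHE_DURATION_MS): boolean {
  if (rawTimestamp === null) {
    return false; // nothing cached yet
  }
  const ts = Number.parseInt(rawTimestamp, 10);
  if (Number.isNaN(ts)) {
    return false; // corrupt timestamp: treat as stale, as the review suggests
  }
  return now - ts < ttlMs;
}
```

A caller would do `if (!isCacheFresh(localStorage.getItem('openrouter_models_cache_timestamp'), Date.now())) { /* refetch */ }`, which keeps the NaN guard and the TTL comparison in one testable place.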
1752-1801: Add an explicit return type and fall back to cached models for the OpenRouter selectors.

- TS lint: add `: Promise<void>` to `updateOpenRouterModelSelectors`.
- If `getModelOptions('openrouter')` is empty but a cache exists, hydrate options from `openrouter_models_cache` to avoid empty selectors after OAuth-driven updates.

```diff
-async function updateOpenRouterModelSelectors() {
+async function updateOpenRouterModelSelectors(): Promise<void> {
   logger.debug('Updating OpenRouter model selectors');
   // Check if OpenRouter models cache is stale and auto-refresh if needed
   await checkAndRefreshOpenRouterCache();
   // Get the latest model options filtered for OpenRouter provider
-  const openrouterModels = getModelOptions('openrouter');
+  let openrouterModels = getModelOptions('openrouter');
+  if (!openrouterModels?.length) {
+    try {
+      const cached = localStorage.getItem('openrouter_models_cache');
+      if (cached) {
+        openrouterModels = JSON.parse(cached);
+      }
+    } catch {
+      /* no-op */
+    }
+  }
```

front_end/panels/ai_chat/tools/CritiqueTool.ts (1)
79-81: Remove the hard dependency on the AgentService API key (blocks OAuth-backed providers).

The tool already uses ctx.provider/ctx.nanoModel. Gating on AgentService.getApiKey() will incorrectly fail for OAuth-based flows (e.g., OpenRouter) and is unused in the LLM call.

Apply:

```diff
- const agentService = AgentService.getInstance();
- const apiKey = agentService.getApiKey();
@@
- if (!apiKey) {
-   return {
-     satisfiesCriteria: false,
-     success: false,
-     error: 'API key not configured.'
-   };
- }
```

Also applies to: 91-97
front_end/panels/ai_chat/core/ConfigurableGraph.ts (1)
67-67: Avoid non-null assertions; validate required config up front.

Using `!` on the possibly-undefined `provider` and `modelName` can crash at runtime and violates lint rules. Validate early and use local consts.

Apply:

```diff
 export function createAgentGraphFromConfig(
   config: GraphConfig,
 ): CompiledGraph {
   logger.info(`Creating graph from config: ${config.name} with model: ${config.modelName}`);
   const graph = new StateGraph<AgentState>({ name: config.name });
+  // Validate required fields for agent/tool execution
+  if (!config.provider || !config.modelName) {
+    throw new Error('[ConfigurableGraph] provider and modelName are required to build the graph');
+  }
+  const provider = config.provider;
+  const modelName = config.modelName;
+
   const nodeFactories: Record<string, (nodeConfig: GraphNodeConfig, graphInstance?: StateGraph<AgentState>) => Runnable<AgentState, AgentState>> = {
-    agent: () => createAgentNode(config.modelName!, config.provider!, config.temperature || 0),
+    agent: () => createAgentNode(modelName, provider, config.temperature || 0),
@@
-  const toolExecutorInstance = createToolExecutorNode(state, config.provider!, config.modelName!, config.miniModel, config.nanoModel);
+  const toolExecutorInstance = createToolExecutorNode(state, provider, modelName, config.miniModel, config.nanoModel);
```

Also applies to: 107-107
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
1465-1468: Stop logging API key prefixes.

Even partial secrets should not be logged. Remove the prefix log.

Apply:

```diff
- logger.info('Retrieved API key:');
- logger.info('- Exists:', !!apiKey);
- logger.info('- Length:', apiKey?.length || 0);
- logger.info('- Prefix:', apiKey?.substring(0, 8) + '...' || 'none');
+ logger.info('Retrieved API key:');
+ logger.info('- Exists:', !!apiKey);
+ logger.info('- Length:', apiKey?.length || 0);
```
90-96: Update the central ModelOption and unify usages.

- In `front_end/panels/ai_chat/LLM/LLMTypes.ts`, extend `ModelOption.type` to include `'groq'` and `'openrouter'` and adjust `label` to be required.
- In `front_end/panels/ai_chat/ui/AIChatPanel.ts` and `front_end/panels/ai_chat/ui/model_selector/ModelSelector.ts`, remove the local `ModelOption` interfaces and import the shared type:

```ts
import type { ModelOption } from '../LLM/LLMTypes.js';
```

This ensures a single source of truth and prevents drift.
🧹 Nitpick comments (43)
front_end/panels/ai_chat/core/AgentNodes.ts (2)
546-551: Silence unexpected-any and remove trailing spaces at the tool execution site.

The current cast triggers `@typescript-eslint/no-explicit-any`, and there are trailing spaces.

```diff
- return await selectedTool.execute(toolArgs as any, {
-   provider: this.provider,
-   model: this.modelName,
-   miniModel: this.miniModel,
-   nanoModel: this.nanoModel
- });
+ // eslint-disable-next-line @typescript-eslint/no-explicit-any
+ return await selectedTool.execute(toolArgs as any, {
+   provider: this.provider,
+   model: this.modelName,
+   miniModel: this.miniModel,
+   nanoModel: this.nanoModel
+ });
```
79-79: Fix the log typo ("Couldn't").

Minor readability polish.

```diff
- logger.warn('Coudnt find the answer');
+ logger.warn("Couldn't find the answer");
```

front_end/panels/ai_chat/ui/__tests__/SettingsDialogOpenRouterCache.test.ts (4)
5-7: Fix the import order per the lint rule.

Place the LLMClient import before SettingsDialog.

```diff
-import {SettingsDialog} from '../SettingsDialog.js';
-import {LLMClient} from '../../LLM/LLMClient.js';
+import {LLMClient} from '../../LLM/LLMClient.js';
+import {SettingsDialog} from '../SettingsDialog.js';
```
12-13: Avoid any[]. Type the capture of fetch calls.

```diff
- let fetchOpenRouterModelsCalls: any[];
+ let fetchOpenRouterModelsCalls: Array<{ apiKey: string; timestamp: number }>;
```
101-101: Use boolean-specific chai asserts and remove trailing spaces.

Aligns with repository lint rules.

```diff
- assert.strictEqual(cacheAge < CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge < CACHE_DURATION_MS);
@@
- assert.strictEqual(cacheAge > CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge > CACHE_DURATION_MS);
@@
- assert.strictEqual(shouldRefresh, false);
+ assert.isFalse(shouldRefresh);
@@
- assert.strictEqual(shouldRefresh, true);
+ assert.isTrue(shouldRefresh);
@@
- assert.strictEqual(shouldAttemptFetch, false);
+ assert.isFalse(shouldAttemptFetch);
@@
- assert.strictEqual(cacheAge < CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge < CACHE_DURATION_MS);
@@
- assert.strictEqual(cacheAge > CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge > CACHE_DURATION_MS);
@@
- assert.strictEqual(cacheAge >= CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge >= CACHE_DURATION_MS);
@@
- assert.strictEqual(cacheAge < CACHE_DURATION_MS, true);
+ assert.isTrue(cacheAge < CACHE_DURATION_MS);
```

Also applies to: 111-111, 127-127, 140-140, 153-153, 191-191, 199-199, 210-210, 230-230
259-259: Add a newline at EOF.

Prevents a lint failure.

```diff
-});
+});
+
```

front_end/panels/ai_chat/tools/Tools.ts (1)
2085-2093: Align the model fallback with the nano/mini routing policy.

These blocks still fall back to `ctx.model`. Confirm whether the broader policy requires nano-only for tool-time LLM calls; if yes, drop the fallback and gate strictly on `ctx.nanoModel` (or `miniModel` where intended).

Also applies to: 2153-2161, 2233-2235, 2243-2245

front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)
front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1)
53-55
: Fix error message to match nanoModel requirement.Message references miniModel while code requires nanoModel.
- if (!ctx?.provider || !ctx.nanoModel) { - return { error: 'Missing LLM context (provider/miniModel) for AccessibilityTreeToMarkdownTool' }; + if (!ctx?.provider || !ctx.nanoModel) { + return { error: 'Missing LLM context (provider/nanoModel) for AccessibilityTreeToMarkdownTool' }; }Also applies to: 57-58
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
848-903: Function name mismatch: resolveUrlsWithLLM does not use an LLM.

Consider renaming it to reflect programmatic resolution.
front_end/panels/ai_chat/auth/OpenRouterOAuth.ts (2)
244-253: Signup→Signin interception: preserve the URL hash and avoid a magic string.

Good addition. Two small tighten-ups:

- Preserve any hash fragment on redirect.
- Avoid hardcoding the storage key string in multiple places.

Apply this diff to preserve the hash and avoid a needless navigate when already at the target:

```diff
  if (url) {
    try {
-     await this.maybeRedirectSignupToSignin(url);
+     await this.maybeRedirectSignupToSignin(url);
    } catch (e) {
```

And inside the helper (see the separate diff below on lines 333-339).
320-342: Make the redirect idempotent and include the fragment; centralize the storage key.

- Append url.hash to keep the full state.
- Skip the navigate if the computed target equals the current href.
- Consider centralizing the flag key.

Apply this diff within the helper:

```diff
  private static async maybeRedirectSignupToSignin(currentUrl: string): Promise<void> {
    try {
      const url = new URL(currentUrl);
      const hostMatches = /(^|\.)openrouter\.ai$/i.test(url.hostname);
      const isSignup = url.pathname.startsWith('/sign-up');
      const alreadyRedirected = sessionStorage.getItem('openrouter_signin_redirect_performed') === 'true';
      if (!hostMatches || !isSignup || alreadyRedirected) {
        return;
      }
-     const signInUrl = `https://openrouter.ai/sign-in${url.search || ''}`;
+     const signInUrl = `https://openrouter.ai/sign-in${url.search || ''}${url.hash || ''}`;
+     if (url.href === signInUrl) {
+       return; // no-op if already at target
+     }
      sessionStorage.setItem('openrouter_signin_redirect_performed', 'true');
      if (this.isDevelopment()) {
        logger.info('Redirecting OpenRouter sign-up -> sign-in');
      }
      await this.navigateToUrl(signInUrl);
    } catch {
      // Ignore parse/navigation errors
    }
  }
```

Outside this hunk (not a diff here): define a constant near OAUTH_STORAGE_KEYS for 'openrouter_signin_redirect_performed' and use it in both places to avoid magic strings.
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
454-462: Model resolution: good coverage; minor readability.

The logic for 'use-mini'/'use-nano'/function/string/fallback is solid. Consider extracting it into a small helper to keep execute() lean and reuse it at other call sites if needed.
414-414: Trailing whitespace triggers lints.

Several lines have trailing spaces flagged by ESLint. Please strip them.

Also applies to: 427-427, 452-452, 457-457, 462-462, 466-466
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
295-312: Handoff model resolution duplicates logic; extract a shared helper.

This mirrors the resolution in ConfigurableAgentTool. Extract a shared `resolveModelName(targetConfig, defaults)` to avoid drift and ensure consistent errors/messages.

I can draft the helper and apply it in both places if helpful.
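A shared helper along the lines this comment suggests could look like the sketch below. The `'use-mini'`/`'use-nano'` sentinel values and the function/string/fallback cases come from the review's description of the existing logic; the exact config shape and names are assumptions:

```typescript
// Hypothetical defaults carried by the runner for one agent invocation.
interface ModelDefaults {
  main: string;
  mini?: string;
  nano?: string;
}

// Hypothetical target-agent config: a sentinel, a concrete model id,
// a resolver function, or nothing (inherit the caller's main model).
interface TargetConfig {
  modelName?: string | ((defaults: ModelDefaults) => string);
}

function resolveModelName(target: TargetConfig, defaults: ModelDefaults): string {
  const spec = target.modelName;
  if (typeof spec === 'function') {
    return spec(defaults);
  }
  switch (spec) {
    case 'use-mini':
      return defaults.mini ?? defaults.main;
    case 'use-nano':
      return defaults.nano ?? defaults.mini ?? defaults.main;
    case undefined:
      return defaults.main; // inherit when nothing is configured
    default:
      return spec; // concrete model id string
  }
}
```

Centralizing this means the handoff path and ConfigurableAgentTool cannot drift apart on how sentinels degrade when mini/nano are unset.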
front_end/panels/ai_chat/ui/SettingsDialog.ts (10)
1690-1750: Cache age check: guard the parse and support OAuth auto-refresh.

- Guard `parseInt` and handle NaN gracefully.
- Optional: if the user is OAuth-authenticated (no API key input), allow auto-refresh via the OAuth token, not just the API key. Right now, OAuth users won't auto-refresh here.

```diff
-const cacheAge = now - parseInt(cacheTimestamp, 10);
+const ts = Number.parseInt(cacheTimestamp, 10);
+if (Number.isNaN(ts)) {
+  logger.debug('OpenRouter cache timestamp invalid, refreshing…');
+  await autoRefreshOpenRouterModels();
+  return;
+}
+const cacheAge = now - ts;
```

If we should support OAuth-driven refresh here, I can wire:

- read `openrouter_auth_method`
- if `oauth`, dynamically import `OpenRouterOAuth`, obtain a token, and fetch models. Confirm the desired flow.
1566-1592: Use logger instead of console.error for OAuth flow errors.

For consistency with the rest of the file and structured logging.

```diff
- } catch (error) {
-   console.error('OAuth flow error:', error);
+ } catch (error) {
+   logger.error('OAuth flow error:', error);
    oauthStatus.textContent = `Error: ${error instanceof Error ? error.message : 'Unknown error'}`;
```
1652-1655: Clean up listeners/interval/styles when the dialog closes.

Window listeners and `statusUpdateInterval` are never removed; styles are appended each time. Add a single cleanup on dialog hide.

```diff
+ const cleanup = () => {
+   window.removeEventListener('openrouter-oauth-success', handleOAuthSuccess);
+   window.removeEventListener('openrouter-oauth-error', handleOAuthError);
+   window.removeEventListener('openrouter-oauth-logout', handleOAuthLogout);
+   clearInterval(statusUpdateInterval);
+   oauthStyles.remove();
+ };
+ dialog.addEventListener?.('hidden', cleanup);
+ // Fallback if Dialog lacks an event:
+ closeButton.addEventListener('click', cleanup);
```

If `UI.Dialog.Dialog` exposes a different hide/close event, adjust accordingly.

Also applies to: 2348-2349, 3172-3175
3089-3140: Remove the duplicated Advanced Settings Toggle CSS block.

This CSS block is duplicated verbatim below; keep one copy to reduce bundle size and avoid maintenance drift.

```diff
- /* Advanced Settings Toggle styles */
- .advanced-settings-toggle-section {
-   padding: 16px 20px;
-   border-bottom: 1px solid var(--color-details-hairline);
-   background-color: var(--color-background-highlight);
- }
-
- .advanced-settings-toggle-container {
-   display: flex;
-   align-items: center;
-   gap: 8px;
-   margin-bottom: 8px;
- }
-
- .advanced-settings-checkbox {
-   margin: 0;
-   transform: scale(1.1);
- }
-
- .advanced-settings-label {
-   font-weight: 500;
-   color: var(--color-text-primary);
-   cursor: pointer;
-   font-size: 14px;
- }
```
2511-2527: Update the disclaimer: providers now include Groq and OpenRouter.

The text still says "support OpenAI directly; LiteLLM proxy for 100+ models". Reflect Groq and OpenRouter support to avoid confusing users.

```diff
- <strong>Model Support:</strong> We currently support OpenAI models directly. And we support LiteLLM as a proxy to access 100+ other models.
+ <strong>Model Support:</strong> We support OpenAI directly, and Groq and OpenRouter as first-class providers. LiteLLM can also proxy access to many other models.
```
3180-3205: Avoid `any[]` for `updateOpenRouterModels` and reuse a local type.

Provide a minimal type for the models (id + optional name). Also remove the trailing spaces flagged by ESLint in this block.

```diff
+type OpenRouterModel = { id: string; name?: string };
+
 /**
  * Static method to update OpenRouter models programmatically (called after OAuth success)
  */
-static updateOpenRouterModels(openrouterModels: any[]): void {
+static updateOpenRouterModels(openrouterModels: OpenRouterModel[]): void {
   try {
     logger.debug('Updating OpenRouter models programmatically...', openrouterModels.length);
     // Convert OpenRouter models to ModelOption format
     const modelOptions: ModelOption[] = openrouterModels.map(model => ({
       value: model.id,
       label: model.name || model.id,
       type: 'openrouter' as const
     }));
```
1695-1745: Fix lint: trailing spaces and an explicit return type.

There are multiple ESLint hits for trailing spaces and one for a missing explicit return type (addressed above). Please run lint autofix or remove the trailing spaces on the indicated lines.

Also applies to: 1755-1758, 1829-1831, 3183-3201
1690-1717: Minor: specify the radix for parseInt.

Even though base-10 is implied here, add the radix for clarity (keep `10` as you already did elsewhere):

```diff
-const cacheAge = now - parseInt(cacheTimestamp, 10);
+const cacheAge = now - parseInt(cacheTimestamp, 10);
```
1488-1534: Prevent style duplication: add an id before injecting styles.

If `show()` is re-opened, the style tag is added again. Guard by id.

```diff
- const oauthStyles = document.createElement('style');
- oauthStyles.textContent = `
+ let oauthStyles = document.getElementById('openrouter-oauth-styles') as HTMLStyleElement | null;
+ if (!oauthStyles) {
+   oauthStyles = document.createElement('style');
+   oauthStyles.id = 'openrouter-oauth-styles';
+   oauthStyles.textContent = `
     .settings-divider {
       text-align: center;
       margin: 15px 0;
       color: var(--color-text-secondary);
       font-size: 12px;
       font-weight: bold;
     }
     ...
-  `;
-  document.head.appendChild(oauthStyles);
+  `;
+  document.head.appendChild(oauthStyles);
+ }
```
1685-1693: Enable fetch via OAuth (optional UX improvement).

The "Fetch OpenRouter Models" button is disabled without an API key; for OAuth users, consider enabling fetch when authenticated via OAuth as well.

Would you like me to add an OAuth-backed fetch path here using `OpenRouterOAuth.getAccessToken()`?

front_end/panels/ai_chat/core/AgentService.ts (2)
?front_end/panels/ai_chat/core/AgentService.ts (2)
21-22
: Import order: satisfy eslint import/orderPlace
../ui/AIChatPanel.js
import before local./Graph.js
import group to appease linter.-import {createAgentGraph} from './Graph.js'; import { createLogger } from './Logger.js'; import {type AgentState, createInitialState, createUserMessage} from './State.js'; ... -import { AIChatPanel } from '../ui/AIChatPanel.js'; +import { AIChatPanel } from '../ui/AIChatPanel.js'; +import {createAgentGraph} from './Graph.js';
205-206: Only pass mini/nano when different from main (minor).

Not required, but it reduces noise in downstream graph/config logs.

```diff
- this.#graph = createAgentGraph(apiKey, modelName, selectedProvider, miniModel, nanoModel);
+ this.#graph = createAgentGraph(
+   apiKey,
+   modelName,
+   selectedProvider,
+   miniModel && miniModel !== modelName ? miniModel : undefined,
+   nanoModel && nanoModel !== miniModel ? nanoModel : undefined
+ );
```

front_end/panels/ai_chat/core/Graph.ts (1)
24-24: Avoid the unused parameter; prefer an options object to curb signature growth.

_apiKey is no longer used. Consider removing it (or marking it intentionally unused via lint config) and replacing the positional args with a single options object to keep the API stable as more variants are added.

```diff
-export function createAgentGraph(_apiKey: string | null, modelName: string, provider?: LLMProvider, miniModel?: string, nanoModel?: string): CompiledGraph {
+export function createAgentGraph(
+  modelName: string,
+  opts?: { provider?: LLMProvider; miniModel?: string; nanoModel?: string }
+): CompiledGraph {
```

front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (2)
63-67: Guard against an empty mainModel; reuse the validated mini/nano accessors.

getSelectedModel() doesn't run the panel's validation. Fall back to the validated mini model when main is empty.

```diff
- const mainModel = AIChatPanel.instance().getSelectedModel();
+ const mainModel = AIChatPanel.instance().getSelectedModel() || AIChatPanel.getMiniModel();
  const miniModel = AIChatPanel.getMiniModel();
  const nanoModel = AIChatPanel.getNanoModel();
```
68-76: Address ESLint object-shorthand and a trailing space.

Use shorthand for evaluationModel and remove the trailing whitespace.

```diff
- evaluationModel: evaluationModel,
+ evaluationModel,
```

front_end/panels/ai_chat/tools/CritiqueTool.ts (6)
51-51: Fix a typo in the tool description.

"finalresponse" → "final response".

Apply:

```diff
- description = 'Evaluates if finalresponse satisfies the user\'s requirements and provides feedback if needed.';
+ description = 'Evaluates if the final response satisfies the user\'s requirements and provides feedback if needed.';
```
168-173: Align the error message with nanoModel usage.

The message still says "provider/miniModel" while the code requires nanoModel.

Apply:

```diff
- if (!ctx?.provider || !ctx.nanoModel) {
-   throw new Error('Missing LLM context (provider/miniModel) for requirements extraction');
+ if (!ctx?.provider || !ctx.nanoModel) {
+   throw new Error('Missing LLM context (provider/nanoModel) for requirements extraction');
  }
```
199-205: Make JSON extraction robust (avoid greedy regex failures).

The regex `\[(.*)\]` is brittle. Try a full parse first, then fall back to slicing the first/last brackets.

```diff
- // Parse the JSON array from the response
- const requirementsMatch = response.text.match(/\[(.*)\]/s);
- if (!requirementsMatch) {
-   return { success: false, requirements: [], error: 'Failed to parse requirements' };
- }
-
- const requirements = JSON.parse(requirementsMatch[0]);
+ // Parse the JSON array from the response (robust)
+ let requirements: string[] = [];
+ try {
+   requirements = JSON.parse(response.text.trim());
+ } catch {
+   const start = response.text.indexOf('[');
+   const end = response.text.lastIndexOf(']');
+   if (start === -1 || end === -1 || end <= start) {
+     return { success: false, requirements: [], error: 'Failed to parse requirements' };
+   }
+   requirements = JSON.parse(response.text.slice(start, end + 1));
+ }
```
275-280
: Fix copy-paste message; use “evaluation” and nanoModel
Apply:
  if (!ctx?.provider || !ctx.nanoModel) {
-   throw new Error('Missing LLM context (provider/miniModel) for requirements extraction');
+   throw new Error('Missing LLM context (provider/nanoModel) for evaluation');
  }
307-313
: Harden criteria JSON parsing
Prefer direct JSON.parse with fallback to bracket slice; avoids partial-object regex pitfalls.
Apply:
- // Extract JSON object from the response
- const jsonMatch = response.text.match(/\{[\s\S]*\}/);
- if (!jsonMatch) {
-   return { success: false, error: 'Failed to parse evaluation criteria' };
- }
-
- const criteria = JSON.parse(jsonMatch[0]) as EvaluationCriteria;
+ // Extract JSON object from the response (robust)
+ let criteria: EvaluationCriteria;
+ try {
+   criteria = JSON.parse(response.text.trim()) as EvaluationCriteria;
+ } catch {
+   const start = response.text.indexOf('{');
+   const end = response.text.lastIndexOf('}');
+   if (start === -1 || end === -1 || end <= start) {
+     return { success: false, error: 'Failed to parse evaluation criteria' };
+   }
+   criteria = JSON.parse(response.text.slice(start, end + 1)) as EvaluationCriteria;
+ }
350-355
: Fix copy-paste message; use “feedback generation” and nanoModel
Apply:
  if (!ctx?.provider || !ctx.nanoModel) {
-   throw new Error('Missing LLM context (provider/miniModel) for requirements extraction');
+   throw new Error('Missing LLM context (provider/nanoModel) for feedback generation');
  }

front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
1535-1573
: Auto-fetch OpenRouter models: solid; consider event consolidation
Works as intended. Minor: there are now two events ('openrouter-models-updated' from SettingsDialog and 'openrouter-models-fetched' here). If feasible, standardize on one to reduce listeners.
1028-1028
: Fix trailing whitespace to satisfy ESLint
Clean up trailing spaces on these lines.
You can run the formatter/linter:
- pnpm lint --fix
- or configure your editor to trim trailing whitespace on save.
Also applies to: 1039-1039, 1541-1541, 1547-1547, 1550-1550, 1553-1553, 1562-1562, 1567-1567
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (5)
11-13
: Fix import order and inline the type import to satisfy linting.
Place the LLMClient import after Constants and inline the type import.
-import { LLMClient } from '../../LLM/LLMClient.js';
-import type { LLMProviderConfig } from '../../LLM/LLMClient.js';
 import { TIMING_CONSTANTS } from '../../core/Constants.js';
+import { LLMClient, type LLMProviderConfig } from '../../LLM/LLMClient.js';
 import { AIChatPanel } from '../../ui/AIChatPanel.js';

Also applies to: 16-16
42-47
: Source-of-truth for model routing should include provider, not just model IDs.
Downstream tools often need provider+model pairs. Prefer using getMiniModelWithProvider()/getNanoModelWithProvider() and threading provider through the config, or ensure the Graph resolves providers for these model names.
Would you confirm that the evaluation path (GenericToolEvaluator → tool graph) correctly resolves providers for main/mini/nano when only model strings are provided?
48-56
: Shorthand for evaluationModel; otherwise OK.
Minor lint fix: use property shorthand for evaluationModel.
 this.config = {
   extractionModel: evaluationModel,
   extractionApiKey: apiKey,
-  evaluationModel: evaluationModel,
+  evaluationModel,
   evaluationApiKey: apiKey,
   mainModel,
   miniModel,
   nanoModel,
84-122
: Type-safety and style: remove any; add braces; tighten provider typing.
- Replace the any-cast for provider with a typed union derived from LLMProviderConfig.
- Add braces to single-line if statements to satisfy curly and reduce diff risk.
 async #initializeLLMForEvaluation(): Promise<void> {
   try {
-    const provider = (localStorage.getItem('ai_chat_provider') || 'openai') as any;
+    type ProviderName = LLMProviderConfig['provider'];
+    const provider = (localStorage.getItem('ai_chat_provider') || 'openai') as ProviderName;
     const providers: LLMProviderConfig[] = [];
-    if (provider === 'openai') {
-      const key = localStorage.getItem('ai_chat_api_key') || '';
-      if (key) providers.push({ provider: 'openai', apiKey: key });
-    } else if (provider === 'openrouter') {
-      const key = localStorage.getItem('ai_chat_openrouter_api_key') || '';
-      if (key) providers.push({ provider: 'openrouter', apiKey: key });
-    } else if (provider === 'groq') {
-      const key = localStorage.getItem('ai_chat_groq_api_key') || '';
-      if (key) providers.push({ provider: 'groq', apiKey: key });
-    } else if (provider === 'litellm') {
-      const key = localStorage.getItem('ai_chat_litellm_api_key') || '';
-      const endpoint = localStorage.getItem('ai_chat_litellm_endpoint') || '';
-      if (endpoint) providers.push({ provider: 'litellm', apiKey: key, providerURL: endpoint });
-    }
+    if (provider === 'openai') {
+      const key = localStorage.getItem('ai_chat_api_key') || '';
+      if (key) {
+        providers.push({ provider: 'openai', apiKey: key });
+      }
+    } else if (provider === 'openrouter') {
+      const key = localStorage.getItem('ai_chat_openrouter_api_key') || '';
+      if (key) {
+        providers.push({ provider: 'openrouter', apiKey: key });
+      }
+    } else if (provider === 'groq') {
+      const key = localStorage.getItem('ai_chat_groq_api_key') || '';
+      if (key) {
+        providers.push({ provider: 'groq', apiKey: key });
+      }
+    } else if (provider === 'litellm') {
+      const key = localStorage.getItem('ai_chat_litellm_api_key') || '';
+      const endpoint = localStorage.getItem('ai_chat_litellm_endpoint') || '';
+      if (endpoint) {
+        providers.push({ provider: 'litellm', apiKey: key, providerURL: endpoint });
+      }
+    }

     // Fallback to OpenAI if specific selection is not configured but an OpenAI key exists
-    if (!providers.length) {
-      const fallback = localStorage.getItem('ai_chat_api_key') || '';
-      if (fallback) providers.push({ provider: 'openai', apiKey: fallback });
-    }
+    if (!providers.length) {
+      const fallback = localStorage.getItem('ai_chat_api_key') || '';
+      if (fallback) {
+        providers.push({ provider: 'openai', apiKey: fallback });
+      }
+    }

     if (!providers.length) {
       logger.warn('LLMClient initialization skipped: no provider credentials found');
       return;
     }

Please confirm LiteLLM requires/does not require an API key when only providerURL is present. If it requires a key, we should also guard on key being truthy before pushing the provider config.
456-456
: Avoid any-cast on globalThis.
Use a safer cast to satisfy @typescript-eslint/no-explicit-any.
-(globalThis as any).EvaluationRunner = EvaluationRunner;
+(globalThis as unknown as Record<string, unknown>).EvaluationRunner = EvaluationRunner;

front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (1)
381-388
: Sanity-check model choice for content generation.
Content writer and other high-output agents now use the mini model. If that’s intentional for cost/speed, ignore. If quality is a concern, consider 'use-main' for those agents.
Also applies to: 421-425
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
front_end/panels/ai_chat/BUILD.gn (1 hunks)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (7 hunks)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2 hunks)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (10 hunks)
front_end/panels/ai_chat/auth/OpenRouterOAuth.ts (4 hunks)
front_end/panels/ai_chat/core/AgentNodes.ts (4 hunks)
front_end/panels/ai_chat/core/AgentService.ts (2 hunks)
front_end/panels/ai_chat/core/ConfigurableGraph.ts (2 hunks)
front_end/panels/ai_chat/core/Graph.ts (2 hunks)
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts (3 hunks)
front_end/panels/ai_chat/evaluation/framework/types.ts (1 hunks)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (5 hunks)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (2 hunks)
front_end/panels/ai_chat/tools/CritiqueTool.ts (3 hunks)
front_end/panels/ai_chat/tools/FullPageAccessibilityTreeToMarkdownTool.ts (1 hunks)
front_end/panels/ai_chat/tools/HTMLToMarkdownTool.ts (1 hunks)
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1 hunks)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2 hunks)
front_end/panels/ai_chat/tools/Tools.ts (2 hunks)
front_end/panels/ai_chat/ui/AIChatPanel.ts (3 hunks)
front_end/panels/ai_chat/ui/SettingsDialog.ts (8 hunks)
front_end/panels/ai_chat/ui/__tests__/SettingsDialogOpenRouterCache.test.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/evaluator.js : LLM evaluator must return structured JSON with scores and reasoning
Applied to files:
front_end/panels/ai_chat/tools/CritiqueTool.ts
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/evaluator.js : LLM evaluator must integrate with the OpenAI API to judge agent responses
Applied to files:
front_end/panels/ai_chat/tools/CritiqueTool.ts
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/evaluator.js : LLM evaluator must assess responses on correctness, completeness, clarity, relevance, and helpfulness
Applied to files:
front_end/panels/ai_chat/tools/CritiqueTool.ts
🧬 Code graph analysis (17)
front_end/panels/ai_chat/core/ConfigurableGraph.ts (1)
front_end/panels/ai_chat/core/AgentNodes.ts (1)
createToolExecutorNode
(409-746)
front_end/panels/ai_chat/core/Graph.ts (2)
front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
LLMProvider
(142-142)front_end/panels/ai_chat/core/Types.ts (1)
CompiledGraph
(26-29)
front_end/panels/ai_chat/core/AgentService.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
AIChatPanel
(306-2366)apiKey
(1136-1167)apiKey
(1209-1228)apiKey
(1538-1572)front_end/panels/ai_chat/core/Graph.ts (1)
createAgentGraph
(24-42)
front_end/panels/ai_chat/ui/__tests__/SettingsDialogOpenRouterCache.test.ts (3)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
window
(1017-1064)apiKey
(1136-1167)apiKey
(1209-1228)apiKey
(1538-1572)SettingsDialog
(2052-2066)front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
SettingsDialog
(362-3206)
front_end/panels/ai_chat/tools/CritiqueTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
provider
(1420-1483)model
(1740-1753)
front_end/panels/ai_chat/tools/Tools.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (7)
logger
(919-1012)logger
(1300-1363)logger
(1386-1413)logger
(1520-1533)logger
(1577-1580)provider
(1420-1483)model
(1740-1753)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
AIChatPanel
(306-2366)
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts (2)
front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
LLMProvider
(142-142)front_end/panels/ai_chat/tools/Tools.ts (1)
LLMContext
(54-60)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
AgentRunnerConfig
(25-40)
front_end/panels/ai_chat/core/AgentNodes.ts (5)
front_end/panels/ai_chat/core/State.ts (1)
AgentState
(28-35)front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
LLMProvider
(142-142)front_end/panels/ai_chat/core/Types.ts (1)
Runnable
(10-12)front_end/panels/ai_chat/tools/Tools.ts (1)
getTools
(4163-4216)front_end/panels/ai_chat/tracing/TracingConfig.ts (1)
createTracingProvider
(225-227)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
provider
(1420-1483)model
(1740-1753)
front_end/panels/ai_chat/auth/OpenRouterOAuth.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(919-1012)logger
(1300-1363)logger
(1386-1413)logger
(1520-1533)logger
(1577-1580)
front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
SettingsDialog
(362-3206)front_end/panels/ai_chat/ui/input/InputBar.ts (1)
modelOptions
(35-35)front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
ModelOption
(88-92)
front_end/panels/ai_chat/ui/SettingsDialog.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (6)
localStorage
(784-807)error
(1890-1903)ModelOption
(91-95)model
(1740-1753)updateModelOptions
(486-605)window
(1017-1064)front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
provider
(1420-1483)model
(1740-1753)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(919-1012)logger
(1300-1363)logger
(1386-1413)logger
(1520-1533)logger
(1577-1580)front_end/panels/ai_chat/tracing/TracingConfig.ts (1)
getCurrentTracingContext
(299-301)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (7)
AIChatPanel
(306-2366)apiKey
(1136-1167)apiKey
(1209-1228)apiKey
(1538-1572)provider
(1420-1483)localStorage
(784-807)error
(1890-1903)front_end/panels/ai_chat/LLM/LLMClient.ts (2)
LLMProviderConfig
(19-23)LLMClient
(49-385)
🪛 ESLint
front_end/panels/ai_chat/evaluation/framework/types.ts
[error] 167-167: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/core/ConfigurableGraph.ts
[error] 107-107: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
[error] 107-107: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
front_end/panels/ai_chat/core/AgentService.ts
[error] 21-21: ../ui/AIChatPanel.js
import should occur before import of ./Graph.js
(import/order)
front_end/panels/ai_chat/ui/__tests__/SettingsDialogOpenRouterCache.test.ts
[error] 6-6: ../../LLM/LLMClient.js
import should occur before import of ../SettingsDialog.js
(import/order)
[error] 12-12: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 100-100: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 101-101: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 110-110: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 111-111: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 126-126: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 127-127: Use assert.isFalse
instead of assert.strictEqual
to check that a value is false
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 139-139: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 140-140: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 150-150: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 153-153: Use assert.isFalse
instead of assert.strictEqual
to check that a value is false
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 165-165: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 177-177: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 191-191: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 199-199: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 208-208: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 210-210: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 230-230: Use assert.isTrue
instead of assert.strictEqual
to check that a value is true
(rulesdir/no-assert-equal-boolean-null-undefined)
[error] 248-248: 'firstTimestamp' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 252-252: 'secondTimestamp' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 259-259: Newline required at end of file but not found.
(@stylistic/eol-last)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts
[error] 71-71: Expected property shorthand.
(object-shorthand)
[error] 71-71: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts
[error] 6-6: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 6-6: '../../tools/Tools.js' imported multiple times.
(import/no-duplicates)
[error] 7-7: ../../LLM/LLMTypes.js
type import should occur before type import of ../../tools/Tools.js
(import/order)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
[error] 414-414: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 417-417: import()
type annotations are forbidden.
(@typescript-eslint/consistent-type-imports)
[error] 417-417: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 418-418: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 419-419: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 420-420: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 421-421: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 422-422: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 423-423: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 424-424: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 425-425: Expected a comma.
(@stylistic/member-delimiter-style)
[error] 427-427: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 452-452: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 457-457: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 462-462: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 466-466: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 493-493: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/core/AgentNodes.ts
[error] 546-546: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 546-546: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 547-547: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/ui/AIChatPanel.ts
[error] 1028-1028: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1039-1039: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1541-1541: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1547-1547: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1550-1550: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1553-1553: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1562-1562: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1567-1567: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/ui/SettingsDialog.ts
[error] 1695-1695: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1702-1702: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1705-1705: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1723-1723: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1728-1728: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1731-1731: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1738-1738: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1742-1742: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1745-1745: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1753-1753: Missing return type on function.
(@typescript-eslint/explicit-function-return-type)
[error] 1755-1755: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1758-1758: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1829-1829: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1831-1831: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3180-3180: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 3183-3183: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3190-3190: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3194-3194: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3199-3199: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3201-3201: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts
[error] 978-978: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
[error] 11-11: ../../LLM/LLMClient.js
import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: ../../LLM/LLMClient.js
type import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 51-51: Expected property shorthand.
(object-shorthand)
[error] 87-87: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 92-92: Expected { after 'if' condition.
(curly)
[error] 95-95: Expected { after 'if' condition.
(curly)
[error] 98-98: Expected { after 'if' condition.
(curly)
[error] 102-102: Expected { after 'if' condition.
(curly)
[error] 108-108: Expected { after 'if' condition.
(curly)
[error] 456-456: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
  ],
  maxIterations: 15,
- modelName: () => AIChatPanel.getMiniModel(),
+ modelName: 'use-mini',
💡 Verification agent
🧩 Analysis chain
'use-mini' sentinel: verify runtime resolution and avoid string duplication.
- Ensure the graph/runtime maps the sentinel 'use-mini' to the active miniModel before provider calls; otherwise providers will receive a literal 'use-mini' and fail.
- Centralize the sentinel as a shared constant to prevent typos and ease future changes.
Verification script to confirm mapping exists and find all sentinel usages:
If mapping logic is missing, I can draft a minimal resolver (e.g., resolveModelName: 'use-mini'|'use-nano'|'use-main' → actual model) and wire it into Graph/ConfigurableAgentTool.
Also applies to: 424-424, 532-532, 641-641, 727-727, 806-806, 882-882, 967-967, 1050-1050, 1423-1423
🏁 Script executed:
#!/bin/bash
# Look for sentinel usage and resolution logic across the repo
rg -nP -C3 'use-mini|use-nano|miniModel|nanoModel|resolve.*model|route.*model|getMiniModelWithProvider|getNanoModelWithProvider'
Length of output: 277228
Add shared sentinel constant and resolver for modelName ‘use-mini’.
No mapping or resolution logic for the ‘use-mini’ (or ‘use-nano’/other) sentinels exists (our grep only hit a sourcemap), so providers will receive the literal string and likely fail at runtime. Centralize these values (e.g. export const MINI_MODEL = 'use-mini') and implement a resolveModelName function that translates 'use-mini'|'use-nano'|'use-main' into the actual model identifier before instantiating providers. Update all occurrences at lines 323, 424, 532, 641, 727, 806, 882, 967, 1050, and 1423 to use the shared constant and resolver.
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts
around lines 323, 424, 532, 641, 727, 806, 882, 967, 1050, and 1423: add a
shared exported sentinel constant (e.g. export const MINI_MODEL = 'use-mini')
and implement an exported resolver function (e.g. resolveModelName(sent:
string): string) that maps sentinels 'use-mini'|'use-nano'|'use-main' to real
provider model identifiers; then replace literal 'use-mini' occurrences at the
listed lines with the shared constant and pass model names through
resolveModelName before instantiating providers so providers receive the actual
model id rather than the sentinel.
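A minimal version of the resolver described above could look like the sketch below; the constant and function names are illustrative and the config shape is assumed, not taken from the repo.

```typescript
// Hypothetical shared sentinels for model selection.
const MODEL_SENTINELS = {
  MAIN: 'use-main',
  MINI: 'use-mini',
  NANO: 'use-nano',
} as const;

// Assumed shape: the three concrete model ids the panel currently exposes.
interface ModelConfig {
  mainModel: string;
  miniModel: string;
  nanoModel: string;
}

// Translate a sentinel into the active model id; pass concrete ids through.
function resolveModelName(name: string, config: ModelConfig): string {
  switch (name) {
    case MODEL_SENTINELS.MAIN: return config.mainModel;
    case MODEL_SENTINELS.MINI: return config.miniModel;
    case MODEL_SENTINELS.NANO: return config.nanoModel;
    default: return name;
  }
}
```

Graph/ConfigurableAgentTool would call resolveModelName right before provider instantiation, so a literal 'use-mini' never reaches an LLM API.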
 import type { Tool } from '../../tools/Tools.js';
 import type { LLMContext } from '../../tools/Tools.js';
 import type { LLMProvider } from '../../LLM/LLMTypes.js';
 import { NavigateURLTool } from '../../tools/Tools.js';
 import type { TestCase, TestResult, EvaluationConfig } from './types.js';
🛠️ Refactor suggestion
Fix duplicate imports and inline type imports (ESLint errors)
Combine imports from ../../tools/Tools.js and place the LLMTypes type import first to satisfy import/no-duplicates and inline-type-imports.
-import type { Tool } from '../../tools/Tools.js';
-import type { LLMContext } from '../../tools/Tools.js';
-import type { LLMProvider } from '../../LLM/LLMTypes.js';
-import { NavigateURLTool } from '../../tools/Tools.js';
+import type { LLMProvider } from '../../LLM/LLMTypes.js';
+import { NavigateURLTool, type LLMContext, type Tool } from '../../tools/Tools.js';
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-import type { Tool } from '../../tools/Tools.js';
-import type { LLMContext } from '../../tools/Tools.js';
-import type { LLMProvider } from '../../LLM/LLMTypes.js';
-import { NavigateURLTool } from '../../tools/Tools.js';
-import type { TestCase, TestResult, EvaluationConfig } from './types.js';
+import type { LLMProvider } from '../../LLM/LLMTypes.js';
+import { NavigateURLTool, type LLMContext, type Tool } from '../../tools/Tools.js';
+import type { TestCase, TestResult, EvaluationConfig } from './types.js';
🧰 Tools
🪛 ESLint
[error] 5-5: '../../tools/Tools.js' imported multiple times.
(import/no-duplicates)
[error] 6-6: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 6-6: '../../tools/Tools.js' imported multiple times.
(import/no-duplicates)
[error] 7-7: ../../LLM/LLMTypes.js
type import should occur before type import of ../../tools/Tools.js
(import/order)
[error] 8-8: There should be at least one empty line between import groups
(import/order)
[error] 9-9: There should be at least one empty line between import groups
(import/order)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts around
lines 5 to 9, there are duplicate and inconsistent imports from
../../tools/Tools.js causing ESLint import/no-duplicates and inline-type-imports
errors; fix by first importing the LLM types: import type { LLMProvider } from
'../../LLM/LLMTypes.js', then combine the tools imports into a single statement
that separates type-only imports and value imports (e.g., use import type {
Tool, LLMContext } from '../../tools/Tools.js' and a single import {
NavigateURLTool } from '../../tools/Tools.js'), removing the duplicate import
lines.
 const toolResult = await ErrorHandlingUtils.withErrorHandling(
   async () => {
-    return await tool.execute(testCase.input);
+    // Build LLM context for tools that require LLM calls (extraction/refinement, etc.)
+    const provider = (localStorage.getItem('ai_chat_provider') as LLMProvider | null) || 'openai';
+    const ctx: LLMContext = {
+      provider,
+      model: this.config.mainModel,
+      miniModel: this.config.miniModel,
+      nanoModel: this.config.nanoModel,
+    };
+    return await tool.execute(testCase.input, ctx);
   },
💡 Verification agent
🧩 Analysis chain
Confirm tool.execute signature across all tools
Since ctx is now passed as a second arg, verify every Tool implements execute(input, ctx) or that the second param is optional in the Tool interface.
🏁 Script executed:
#!/bin/bash
# Find tool classes whose execute method still takes a single parameter
# (uses AST where possible; falls back to regex for breadth)
echo "Two-arg execute methods:"
ast-grep --lang ts --pattern $'class $_ {\n $$$\n execute($_, $_) {\n $$$\n }\n $$$\n}' || true
echo
echo "Single-arg execute methods (to update):"
ast-grep --lang ts --pattern $'class $_ {\n $$$\n execute($_) {\n $$$\n }\n $$$\n}' || true
echo
echo "Type interface check for Tool.execute:"
rg -nP -C2 'interface\s+Tool\b|type\s+Tool\s*=\s*{|\bexecute\s*\(' front_end/panels/ai_chat/tools
Length of output: 25538
Add optional ctx parameter to Tool.execute in interface and implementations
GenericToolEvaluator now invokes tool.execute(args, ctx). Update the Tool interface to declare execute(args: TArgs, ctx?: LLMContext): Promise<TResult> and add ctx?: LLMContext to the following implementations that currently only accept one argument:
- front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts (line 77)
- front_end/panels/ai_chat/tools/DocumentSearchTool.ts (line 90)
- front_end/panels/ai_chat/tools/BookmarkStoreTool.ts (line 75)
Also update any built-in tools in Tools.ts missing the ctx parameter.
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts around
lines 139 to 150, GenericToolEvaluator now calls tool.execute(args, ctx) but the
Tool interface and several implementations still declare execute(args) without
the optional ctx; update the Tool interface signature to execute(args: TArgs,
ctx?: LLMContext): Promise<TResult>, then update the listed implementations to
accept the optional second parameter
(front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts at ~line 77,
front_end/panels/ai_chat/tools/DocumentSearchTool.ts at ~line 90,
front_end/panels/ai_chat/tools/BookmarkStoreTool.ts at ~line 75) and any
built-in tools in Tools.ts to add ctx?: LLMContext to their execute signatures
and propagate/use it as needed (or ignore it if unused) so calls from
GenericToolEvaluator compile.
 // Build LLM context for tools that require LLM calls (extraction/refinement, etc.)
 const provider = (localStorage.getItem('ai_chat_provider') as LLMProvider | null) || 'openai';
 const ctx: LLMContext = {
   provider,
   model: this.config.mainModel,
   miniModel: this.config.miniModel,
   nanoModel: this.config.nanoModel,
 };
 return await tool.execute(testCase.input, ctx);
🛠️ Refactor suggestion
Harden provider sourcing and model fallback; avoid ReferenceError in non-DOM contexts
Accessing localStorage unguarded can throw outside the browser. Also validate provider and provide a safe model fallback if mainModel is unset.
- // Build LLM context for tools that require LLM calls (extraction/refinement, etc.)
- const provider = (localStorage.getItem('ai_chat_provider') as LLMProvider | null) || 'openai';
- const ctx: LLMContext = {
- provider,
- model: this.config.mainModel,
- miniModel: this.config.miniModel,
- nanoModel: this.config.nanoModel,
- };
+ // Build LLM context for tools that require LLM calls (extraction/refinement, etc.)
+ const rawProvider = typeof localStorage !== 'undefined'
+ ? localStorage.getItem('ai_chat_provider')
+ : null;
+ const provider: LLMProvider =
+ rawProvider === 'openai' || rawProvider === 'litellm' || rawProvider === 'groq' || rawProvider === 'openrouter'
+ ? rawProvider
+ : 'openai';
+ const model =
+ this.config.mainModel ?? this.config.miniModel ?? this.config.nanoModel ?? 'gpt-4.1-mini';
+ const ctx: LLMContext = {
+ provider,
+ model,
+ miniModel: this.config.miniModel,
+ nanoModel: this.config.nanoModel,
+ };
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Build LLM context for tools that require LLM calls (extraction/refinement, etc.)
const rawProvider = typeof localStorage !== 'undefined'
    ? localStorage.getItem('ai_chat_provider')
    : null;
const provider: LLMProvider =
    rawProvider === 'openai'
    || rawProvider === 'litellm'
    || rawProvider === 'groq'
    || rawProvider === 'openrouter'
      ? rawProvider
      : 'openai';
const model =
    this.config.mainModel
    ?? this.config.miniModel
    ?? this.config.nanoModel
    ?? 'gpt-4.1-mini';
const ctx: LLMContext = {
  provider,
  model,
  miniModel: this.config.miniModel,
  nanoModel: this.config.nanoModel,
};
return await tool.execute(testCase.input, ctx);
```
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts around
lines 141 to 149, accessing localStorage unguarded can throw in non-DOM contexts
and mainModel may be unset; replace the direct localStorage read with a safe
check (e.g. ensure typeof window !== 'undefined' and window.localStorage exists)
before attempting to read, validate the retrieved value against the allowed
LLMProvider values and fall back to a safe default like 'openai' if invalid or
missing, and compute the model to use by falling back from this.config.mainModel
to this.config.miniModel to this.config.nanoModel and finally to a constant
default model so you never pass an undefined model to the LLMContext; then build
ctx with the validated provider and deterministic model and call
tool.execute(testCase.input, ctx).
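The guarded resolution logic from this prompt can be isolated into two small helpers (a sketch; the `'gpt-4.1-mini'` default mirrors the suggestion above and is otherwise an assumption):

```typescript
type LLMProvider = 'openai' | 'litellm' | 'groq' | 'openrouter';

const ALLOWED: readonly LLMProvider[] = ['openai', 'litellm', 'groq', 'openrouter'];

function resolveProvider(raw: string | null): LLMProvider {
  // Validate instead of casting blindly; unknown or missing values fall back to 'openai'.
  return (ALLOWED as readonly string[]).includes(raw ?? '') ? raw as LLMProvider : 'openai';
}

function resolveModel(main?: string, mini?: string, nano?: string, fallback = 'gpt-4.1-mini'): string {
  // Deterministic fallback chain so LLMContext.model is never undefined.
  return main ?? mini ?? nano ?? fallback;
}
```

In the evaluator, `resolveProvider` would wrap the (guarded) `localStorage` read and `resolveModel` would take the three config fields, keeping the `ctx` construction free of branching.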
```ts
// Model settings for tools and agents under test
mainModel: string;
miniModel: string;
nanoModel: string;
```
🛠️ Refactor suggestion
Make new EvaluationConfig model fields optional to avoid breaking callers
These new required fields can break existing configs/tests. Mark them optional (or provide defaults at usage sites).
- mainModel: string;
- miniModel: string;
- nanoModel: string;
+ mainModel?: string;
+ miniModel?: string;
+ nanoModel?: string;
Also remove the trailing space at Line 167 per ESLint.
📝 Committable suggestion
```ts
// Model settings for tools and agents under test
mainModel?: string;
miniModel?: string;
nanoModel?: string;
```
🧰 Tools
🪛 ESLint
[error] 167-167: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/evaluation/framework/types.ts around lines 163 to
167, the newly added mainModel, miniModel, and nanoModel fields are required
which may break existing configs/tests; make these fields optional (e.g., add ?
to their declarations) or ensure defaults are provided at usage sites, and
remove the trailing space at line 167 to satisfy ESLint.
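One way to keep the fields optional without scattering fallbacks at every usage site, sketched under the assumption that a single default model string is acceptable:

```typescript
// Optional model fields, as the suggestion proposes.
interface EvaluationConfig {
  mainModel?: string;
  miniModel?: string;
  nanoModel?: string;
}

// Normalize once at the boundary; callers downstream see required fields.
function withModelDefaults(cfg: EvaluationConfig, fallback = 'gpt-4.1-mini') {
  return {
    mainModel: cfg.mainModel ?? fallback,
    miniModel: cfg.miniModel ?? cfg.mainModel ?? fallback,
    nanoModel: cfg.nanoModel ?? cfg.miniModel ?? cfg.mainModel ?? fallback,
  };
}
```

This keeps existing configs/tests compiling while giving the runner deterministic values.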
```diff
  if (!ctx?.provider || !ctx.miniModel) {
    throw new Error('Missing LLM context (provider/miniModel) for URL retry extraction');
  }
  const provider = ctx.provider;
- const model = ctx.miniModel || ctx.model;
+ const model = ctx.miniModel;
  const llmResponse = await callLLMWithTracing(
```
🛠️ Refactor suggestion
Enforce miniModel and parse result in URL-retry LLM call.
Parsing ensures subsequent resolution works.
- const llmResponse = await callLLMWithTracing(
+ const llmResponse = await callLLMWithTracing(
{
provider,
model,
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: extractionPrompt }
],
systemPrompt: systemPrompt,
temperature: 0.1,
options: { retryConfig: { maxRetries: 3, baseDelayMs: 1500 } }
},
{
toolName: this.name,
operationName: 'retry_url_resolution',
context: 'url_resolution_retry',
additionalMetadata: {
attemptNumber,
unresolvedCount: unresolvedNodeIds.length,
failedNodeIds: unresolvedNodeIds
}
}
);
- const result = llmResponse.text;
-
- return result;
+ const text = llmResponse.text || '';
+ return JSON.parse(text);
Also applies to: 414-417
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts around lines
386-391 and 414-417, the URL-retry LLM calls proceed without enforcing that
ctx.miniModel exists and without parsing the LLM response before using it;
update both call sites to (1) throw if ctx.miniModel is falsy (already checked
for provider but ensure miniModel is validated), (2) pass the provider and
miniModel variables explicitly to the LLM call, and (3) parse the LLM call
result (e.g., JSON.parse or a safe parser) and validate the parsed shape before
continuing so subsequent resolution logic operates on a confirmed object rather
than an unparsed string or undefined value.
Claude finished @tysonthomas9's task - View job

PR Code Review

Comprehensive review of the model config, OpenRouter OAuth, and model refresh fixes.

Review Checklist:
Code Quality & Best Practices

✅ Strengths
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (1)

446-456: You merge per-request model overrides but never use them in chat. `mergedInput` pulls `params.model.*`, but `executeChatEvaluation` ignores these and always uses constructor models. If per-evaluation overrides should be honored, thread them through.

Apply this minimal change in `executeChatEvaluation`:

```diff
- const modelName = this.judgeModel;
- const miniModel = this.miniModel;
- const nanoModel = this.nanoModel;
+ // Prefer per-request overrides; fall back to constructor defaults
+ const modelName = input.main_model || this.judgeModel;
+ const miniModel = input.mini_model || this.miniModel || modelName;
+ const nanoModel = input.nano_model || this.nanoModel || miniModel;
```
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (1)
148-170: Gate vision evaluation by provider before calling OpenAI. Only invoke the OpenAI Vision API when the resolved provider is `openai`; fall back to text-only evaluation for all others.

```diff
- // For vision evaluation, we use OpenAI API directly since UnifiedLLMClient may not support multimodal yet
- const response = await this.callVisionAPI(model, messages, llmConfig.temperature ?? 0.1);
+ // Resolve provider; only call OpenAI Vision when provider is 'openai'
+ const provider = await this.#getProviderForModel(model);
+ if (provider !== 'openai') {
+   logger.warn(`Vision eval skipped for provider=${provider}. Falling back to text-only.`);
+   return await this.evaluateTextOnly(output, testCase, _criteria, model, llmConfig);
+ }
+ const response = await this.callVisionAPI(model, messages, llmConfig.temperature ?? 0.1);
```
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
36-44: Don't hard-require an API key when the provider doesn't need one (LiteLLM endpoint-only). Throwing if no apiKey blocks valid LiteLLM endpoint configs. Use provider validation instead.

```diff
- const apiKey = agentService.getApiKey();
-
- if (!apiKey) {
-   throw new Error('API key not configured. Please configure in AI Chat settings.');
- }
+ const apiKey = agentService.getApiKey();
+ const selectedProvider = (localStorage.getItem('ai_chat_provider') || 'openai') as 'openai'|'openrouter'|'groq'|'litellm';
+ const valid = LLMClient.validateProviderCredentials(selectedProvider);
+ if (!valid.isValid) {
+   throw new Error(valid.message || 'Provider credentials are not configured. Please update Settings.');
+ }
```
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
49-78: Remove localStorage monkey-patch and secret logging (high risk). Overriding localStorage methods and logging value previews can leak API keys/PII and destabilize third-party code. Gate behind an explicit debug flag and mask values, or remove entirely.

```diff
-class StorageMonitor {
+class StorageMonitor {
   private static instance: StorageMonitor | null = null;
   private originalSetItem: typeof localStorage.setItem;
   private originalRemoveItem: typeof localStorage.removeItem;
 @@
   private setupStorageMonitoring(): void {
-    // Monitor setItem operations
-    localStorage.setItem = (key: string, value: string) => {
+    // Enable only when explicitly opted-in
+    const enabled = localStorage.getItem('ai_chat_debug_storage_monitor') === '1';
+    if (!enabled) return;
+    const mask = (v: string) => (v ? `${v.slice(0, 2)}…${v.slice(-2)}` : '');
+    // Monitor setItem operations (masked)
+    localStorage.setItem = (key: string, value: string) => {
       if (key.includes('openrouter') || key.includes('ai_chat')) {
         logger.debug(`=== LOCALSTORAGE SET ===`);
         logger.debug(`Key: ${key}`);
         logger.debug(`Value exists: ${!!value}`);
-        logger.debug(`Value length: ${value?.length || 0}`);
-        logger.debug(`Value preview: ${value?.substring(0, 50) + (value?.length > 50 ? '...' : '') || 'null'}`);
+        logger.debug(`Value length: ${value?.length || 0}`);
+        logger.debug(`Value masked: ${mask(value)}`);
         logger.debug(`Timestamp: ${new Date().toISOString()}`);
       }
       return this.originalSetItem(key, value);
     };
 @@
-    localStorage.removeItem = (key: string) => {
+    localStorage.removeItem = (key: string) => {
       if (key.includes('openrouter') || key.includes('ai_chat')) {
         logger.debug(`=== LOCALSTORAGE REMOVE ===`);
         logger.debug(`Key: ${key}`);
         logger.debug(`Timestamp: ${new Date().toISOString()}`);
       }
       return this.originalRemoveItem(key);
     };
   }
```

Option B (safer): delete StorageMonitor entirely and its invocation in the constructor.
1450-1479: Avoid logging key prefixes. Even partial API key logging can be sensitive. Drop prefix logging.

```diff
- logger.info('Retrieved API key:');
- logger.info('- Exists:', !!apiKey);
- logger.info('- Length:', apiKey?.length || 0);
- logger.info('- Prefix:', apiKey?.substring(0, 8) + '...' || 'none');
+ logger.info('Retrieved API key metadata:');
+ logger.info('- Exists:', !!apiKey);
+ logger.info('- Length:', apiKey?.length || 0);
```
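The masking idea used in these logging suggestions reduces to a small helper (a sketch; the cutoff for very short values is an assumption):

```typescript
// Show only the first and last two characters of a secret; never the middle.
function maskSecret(value: string): string {
  if (!value) return '';
  if (value.length <= 4) return '****'; // too short to reveal anything safely
  return `${value.slice(0, 2)}…${value.slice(-2)}`;
}
```

Routing every debug log of a key through a helper like this makes it much harder to accidentally reintroduce full-value or prefix logging later.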
♻️ Duplicate comments (3)
front_end/panels/ai_chat/core/AgentService.ts (1)
23-23
: Cycle with AIChatPanel appears resolved — nice
The earlier circular import between AgentService and AIChatPanel is gone here. 👍
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
67-69
: Avoid race: ensure LLMClient is ready before any test/tool uses it
Non-blocking init can leave LLMClient uninitialized when tools run, causing runtime errors. Await readiness in runSingleTest and runAllTests; add a small guard method.

```diff
 constructor(options: EvaluationRunnerOptions) {
 @@
-  // Initialize LLM client for tools under evaluation (based on selected provider)
-  void this.#initializeLLMForEvaluation();
+  // Kick off provider init; tests will await readiness via ensure method
+  void this.#initializeLLMForEvaluation();
 @@
 async runSingleTest(testCase: TestCase<any>): Promise<TestResult> {
+  // Ensure LLM is initialized before any tool/LLM access
+  await this.#ensureLLMReadyForEvaluation();
 @@
 async runAllTests(testCases: TestCase<any>[]): Promise<TestResult[]> {
+  // Ensure LLM is initialized before batch execution
+  await this.#ensureLLMReadyForEvaluation();
```

Add the helper:

```diff
+  // Ensure LLMClient is initialized (idempotent)
+  async #ensureLLMReadyForEvaluation(): Promise<void> {
+    try {
+      // getStats() is assumed; if absent, wrap in try/catch and rely on call throwing otherwise
+      const stats = (LLMClient as any).getInstance().getStats?.();
+      if (!stats?.initialized) {
+        await this.#initializeLLMForEvaluation();
+      }
+    } catch {
+      await this.#initializeLLMForEvaluation();
+    }
+  }
```

Also applies to: 152-166, 307-323
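The ensure-ready guard discussed here is essentially promise memoization; a self-contained sketch with a counter standing in for real provider registration:

```typescript
// Memoize the in-flight init promise so concurrent callers share one initialization.
let initCount = 0;
let initPromise: Promise<void> | null = null;

function doInit(): Promise<void> {
  initCount++; // real code would register LLM providers here
  return Promise.resolve();
}

function ensureInitialized(): Promise<void> {
  if (!initPromise) {
    initPromise = doInit();
  }
  return initPromise;
}
```

Compared with a boolean flag, memoizing the promise also handles the window where init has started but not finished: later callers await the same promise instead of starting a second init.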
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
1357-1364
: Good: propagating mini/nano models to AgentService
Passing miniForInit/nanoForInit ensures tools can pick lightweight variants. Also confirms the earlier AgentService/AIChatPanel import cycle is removed. 👍
🧹 Nitpick comments (22)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (3)
570-573
: Use `slice` instead of deprecated `substr`.
Minor compatibility/readability improvement.

```diff
- const spanId = `tool-exec-${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
+ const spanId = `tool-exec-${Date.now()}-${Math.random().toString(36).slice(2, 11)}`;
```
: Safer error logging.
JSON.stringify(error)
can throw on circular structures. Prefer a defensive serializer.- logger.error('WebSocket error:', typeof error === 'object' ? JSON.stringify(error) : error); + const msg = (error && typeof error === 'object' && 'message' in error) ? (error as any).message : String(error); + logger.error('WebSocket error:', msg);
717-717
: Fix ESLint: trailing spaces.Remove trailing spaces to satisfy @stylistic/no-trailing-spaces.
Run:
pnpm eslint front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts --fix
Also applies to: 723-723
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (1)
23-27
: Constructor change is sane; require non-empty model.If
defaultModel
is empty, later calls will fail obscurely. Add a guard.constructor(apiKey: string, defaultModel: string) { this.apiKey = apiKey; - // Model must be provided by caller; no fallbacks - this.defaultModel = defaultModel; + if (!defaultModel?.trim()) throw new Error('LLMEvaluator: defaultModel is required'); + this.defaultModel = defaultModel.trim(); }front_end/panels/ai_chat/ui/EvaluationDialog.ts (4)
203-208
: Config-object constructor usage: good; dedupe option construction.You repeat the same object four times in this file. Extract a small helper to build it once.
+ #buildRunnerOptions() { + return { + judgeModel: this.#state.judgeModel, + mainModel: AIChatPanel.instance().getSelectedModel(), + miniModel: AIChatPanel.getMiniModel(), + nanoModel: AIChatPanel.getNanoModel(), + }; + } ... - this.#evaluationRunner = new EvaluationRunner({ - judgeModel: this.#state.judgeModel, - mainModel: AIChatPanel.instance().getSelectedModel(), - miniModel: AIChatPanel.getMiniModel(), - nanoModel: AIChatPanel.getNanoModel(), - }); + this.#evaluationRunner = new EvaluationRunner(this.#buildRunnerOptions());
214-220
: Same duplication for Vision runner.
Reuse the helper and pass `visionEnabled` alongside.

```diff
- this.#agentEvaluationRunner = new VisionAgentEvaluationRunner({
-   visionEnabled: this.#state.visionEnabled,
-   judgeModel: this.#state.judgeModel,
-   mainModel: AIChatPanel.instance().getSelectedModel(),
-   miniModel: AIChatPanel.getMiniModel(),
-   nanoModel: AIChatPanel.getNanoModel(),
- });
+ this.#agentEvaluationRunner = new VisionAgentEvaluationRunner({
+   visionEnabled: this.#state.visionEnabled,
+   ...this.#buildRunnerOptions(),
+ });
```
900-905
: Dispose previous runners on reinitialize to avoid leaks.
Before overwriting, call a `dispose()`/`teardown()` if available, or add one.

```diff
 // Reinitialize evaluation runners with new model
 try {
+  this.#evaluationRunner?.dispose?.();
   this.#evaluationRunner = new EvaluationRunner({
     judgeModel: this.#state.judgeModel,
     mainModel: AIChatPanel.instance().getSelectedModel(),
     miniModel: AIChatPanel.getMiniModel(),
     nanoModel: AIChatPanel.getNanoModel(),
   });
 } catch (error) {
 ...
 try {
+  this.#agentEvaluationRunner?.dispose?.();
   this.#agentEvaluationRunner = new VisionAgentEvaluationRunner({
     visionEnabled: this.#state.visionEnabled,
     judgeModel: this.#state.judgeModel,
     mainModel: AIChatPanel.instance().getSelectedModel(),
     miniModel: AIChatPanel.getMiniModel(),
     nanoModel: AIChatPanel.getNanoModel(),
   });
```
212-212
: Fix ESLint: trailing spaces.
Remove trailing spaces flagged by @stylistic/no-trailing-spaces.
Run:
pnpm eslint front_end/panels/ai_chat/ui/EvaluationDialog.ts --fix
Also applies to: 909-909
front_end/panels/ai_chat/core/AgentService.ts (1)
196-199
: Minor: stop threading unused apiKey into createAgentGraph.
Graph.createAgentGraph ignores _apiKey; consider removing that param from both sides to reduce confusion.

```diff
- this.#graph = createAgentGraph(apiKey, modelName, selectedProvider, miniModel, nanoModel);
+ this.#graph = createAgentGraph(modelName, selectedProvider, miniModel, nanoModel);
```

(Follow-up in Graph.ts to drop the first parameter.)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (4)
11-13
: Fix import-order and inline type-import lint failures.
Reorder after Constants and inline the type import to satisfy ESLint.

```diff
-import { LLMClient } from '../../LLM/LLMClient.js';
-import type { LLMProviderConfig } from '../../LLM/LLMClient.js';
 import { TIMING_CONSTANTS } from '../../core/Constants.js';
+import { LLMClient } from '../../LLM/LLMClient.js';
+import type { LLMProviderConfig } from '../../LLM/LLMClient.js';
```
49-57
: Use property shorthand where possible.
Minor lint fix; keep explicit mapping only where names differ.

```diff
 this.config = {
   extractionModel: evaluationModel,
   extractionApiKey: apiKey,
-  evaluationModel: evaluationModel,
+  evaluationModel,
   evaluationApiKey: apiKey,
   mainModel,
   miniModel,
   nanoModel,
```
85-123
: Tighten types and add braces to satisfy lint.
Replace any with a union and add curly braces for single-line ifs.

```diff
- const provider = (localStorage.getItem('ai_chat_provider') || 'openai') as any;
+ const provider = (localStorage.getItem('ai_chat_provider') || 'openai') as 'openai'|'openrouter'|'groq'|'litellm';
 @@
- if (provider === 'openai') {
-   const key = localStorage.getItem('ai_chat_api_key') || '';
-   if (key) providers.push({ provider: 'openai', apiKey: key });
- } else if (provider === 'openrouter') {
-   const key = localStorage.getItem('ai_chat_openrouter_api_key') || '';
-   if (key) providers.push({ provider: 'openrouter', apiKey: key });
- } else if (provider === 'groq') {
-   const key = localStorage.getItem('ai_chat_groq_api_key') || '';
-   if (key) providers.push({ provider: 'groq', apiKey: key });
- } else if (provider === 'litellm') {
-   const key = localStorage.getItem('ai_chat_litellm_api_key') || '';
-   const endpoint = localStorage.getItem('ai_chat_litellm_endpoint') || '';
-   if (endpoint) providers.push({ provider: 'litellm', apiKey: key, providerURL: endpoint });
- }
+ if (provider === 'openai') {
+   const key = localStorage.getItem('ai_chat_api_key') || '';
+   if (key) { providers.push({ provider: 'openai', apiKey: key }); }
+ } else if (provider === 'openrouter') {
+   const key = localStorage.getItem('ai_chat_openrouter_api_key') || '';
+   if (key) { providers.push({ provider: 'openrouter', apiKey: key }); }
+ } else if (provider === 'groq') {
+   const key = localStorage.getItem('ai_chat_groq_api_key') || '';
+   if (key) { providers.push({ provider: 'groq', apiKey: key }); }
+ } else if (provider === 'litellm') {
+   const key = localStorage.getItem('ai_chat_litellm_api_key') || '';
+   const endpoint = localStorage.getItem('ai_chat_litellm_endpoint') || '';
+   if (endpoint) { providers.push({ provider: 'litellm', apiKey: key, providerURL: endpoint }); }
+ }
 @@
- if (!providers.length) {
-   const fallback = localStorage.getItem('ai_chat_api_key') || '';
-   if (fallback) providers.push({ provider: 'openai', apiKey: fallback });
- }
+ if (!providers.length) {
+   const fallback = localStorage.getItem('ai_chat_api_key') || '';
+   if (fallback) { providers.push({ provider: 'openai', apiKey: fallback }); }
+ }
```
457-458
: Avoid any: safe global assignment.
Use a structural type to attach EvaluationRunner without any.

```diff
-(globalThis as any).EvaluationRunner = EvaluationRunner;
+(globalThis as unknown as Record<string, unknown>).EvaluationRunner = EvaluationRunner;
```
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
1019-1042
: Trim trailing whitespace (lint errors).
Remove trailing spaces flagged by ESLint in the OAuth success handler block.

```diff
- window.addEventListener('openrouter-oauth-success', async () => {
+ window.addEventListener('openrouter-oauth-success', async () => {
 @@
-      // Auto-fetch OpenRouter models after successful OAuth
+      // Auto-fetch OpenRouter models after successful OAuth
       if (apiKey) {
         try {
           logger.info('Auto-fetching OpenRouter models after OAuth success...');
           await this.#autoFetchOpenRouterModels(apiKey);
           logger.info('Successfully auto-fetched OpenRouter models');
         } catch (error) {
           logger.warn('Failed to auto-fetch OpenRouter models after OAuth:', error);
         }
       }
```

Also applies to: 1030-1038
1545-1582
: ModelOption type consistency across modules.
This block creates type: 'openrouter'. Ensure the shared ModelOption union includes 'groq' and 'openrouter' to avoid TS friction in other files importing that type.

Proposed change (in front_end/panels/ai_chat/LLM/LLMTypes.ts):

```diff
-export interface ModelOption {
-  value: string;
-  type: 'openai' | 'litellm';
-  label?: string;
-}
+export interface ModelOption {
+  value: string;
+  type: 'openai' | 'litellm' | 'groq' | 'openrouter';
+  label?: string;
+}
```

Search for other narrow unions and align accordingly.
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (7)
71-76
: Fix ESLint warnings: use property shorthand and remove trailing space (Line 74).
evaluationModel: evaluationModel,
triggers object-shorthand and trailing-space errors.

```diff
- evaluationModel: evaluationModel,
+ evaluationModel,
```
82-84
: Avoid hardcoded paths for snapshots/reports; make them configurable with sane defaults.Let callers override directories via options to support CI/workspace layouts.
export interface VisionRunnerOptions { visionEnabled?: boolean; judgeModel: string; mainModel: string; miniModel: string; nanoModel: string; + snapshotDir?: string; + reportDir?: string; } ... retries: 2, - snapshotDir: './snapshots/agents', - reportDir: './reports/agents' + snapshotDir: options.snapshotDir ?? './snapshots/agents', + reportDir: options.reportDir ?? './reports/agents'Also applies to: 42-48
174-189
: Honor screenshotDelay in visual verification.
visualVerification.screenshotDelay
is defined but unused. Respect it before taking the after-action screenshot.afterToolExecution: async () => { // Capture after screenshot if vision is enabled if (shouldUseVision && visualConfig?.captureAfterAction) { logger.info('📸 Capturing after screenshot...'); - const afterResult = await this.screenshotTool.execute({ fullPage: false }); + if (visualConfig?.screenshotDelay && visualConfig.screenshotDelay > 0) { + await new Promise(res => setTimeout(res, visualConfig.screenshotDelay)); + } + const afterResult = await this.screenshotTool.execute({ fullPage: false });
228-232
: Guard against undefined validation summary before+=
.If
summary
is undefined,+=
yields “undefined | …”. Normalize before appending.- agentResult.validation.summary += ` | ${shouldUseVision ? 'Vision' : 'LLM'} Score: ${llmJudgment.score}/100`; + const suffix = `${shouldUseVision ? 'Vision' : 'LLM'} Score: ${llmJudgment.score}/100`; + agentResult.validation.summary = [agentResult.validation.summary, suffix].filter(Boolean).join(' | ');
410-421
: Avoid.toLowerCase()
on non-string tool outputs.Tool message content can be objects/arrays. Normalize to string before searching for error keywords.
- // Not JSON, check for error keywords - if (message.content.toLowerCase().includes('error') || - message.content.toLowerCase().includes('failed')) { + // Not JSON, check for error keywords (robust to non-strings) + const contentStr = typeof message.content === 'string' + ? message.content + : JSON.stringify(message.content ?? ''); + if (/error|failed/i.test(contentStr)) { info.errorCount++; }
193-197
: Avoidany
for agent; narrow the type.Casting
agent as any
suppresses useful checks. Prefer a concrete agent/tool interface (e.g.,ConfigurableAgentTool
) or a minimal callable type.- const agentResult = await evaluator.runTest(testCase, agent as any, tracingContext); + const agentResult = await evaluator.runTest(testCase, agent as unknown as /* ConfigurableAgentTool */ any, tracingContext);If
ConfigurableAgentTool
(or equivalent) exists, replaceany
with it.
271-286
: Trim large blobs from trace output (screenshots).
agentResult
may include base64 screenshots; sending them to tracing can bloat storage and slow UIs. Consider excluding or hashing images in trace payloads and storing them separately.- await this.tracingProvider.finalizeTrace(traceId, { - output: agentResult, + const { screenshots, ...lightOutput } = (agentResult.output ?? {}) as any; + await this.tracingProvider.finalizeTrace(traceId, { + output: { ...agentResult, output: lightOutput }, statusMessage: agentResult.status, metadata: {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
front_end/panels/ai_chat/core/AgentService.ts (2 hunks)
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (4 hunks)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (3 hunks)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (6 hunks)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (5 hunks)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5 hunks)
front_end/panels/ai_chat/ui/EvaluationDialog.ts (3 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/evaluator.js : LLM evaluator must integrate with the OpenAI API to judge agent responses
Applied to files:
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts
front_end/panels/ai_chat/ui/AIChatPanel.ts
🧬 Code graph analysis (6)
front_end/panels/ai_chat/ui/EvaluationDialog.ts (3)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
EvaluationRunner (29-454)
front_end/panels/ai_chat/ui/AIChatPanel.ts (1)
AIChatPanel (306-2376)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (1)
VisionAgentEvaluationRunner (50-473)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger (919-1012)
logger (1303-1373)
logger (1396-1423)
logger (1530-1543)
logger (1587-1590)
front_end/panels/ai_chat/core/AgentService.ts (1)
front_end/panels/ai_chat/core/Graph.ts (1)
createAgentGraph
(24-42)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (4)
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts (1)
GenericToolEvaluator (32-328)
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (1)
LLMEvaluator (19-363)
front_end/panels/ai_chat/evaluation/framework/types.ts (1)
EvaluationConfig (154-176)
front_end/panels/ai_chat/LLM/LLMClient.ts (2)
LLMProviderConfig (19-23)
LLMClient (49-385)
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (6)
apiKey (1136-1167)
apiKey (1209-1228)
apiKey (1548-1582)
provider (1430-1493)
model (1750-1763)
localStorage (784-807)
front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)
front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient (49-385)
front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
SettingsDialog (362-3206)
front_end/panels/ai_chat/ui/model_selector/ModelSelector.ts (2)
ModelOption (10-10)
ModelOption (46-50)
front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
ModelOption (88-92)
🪛 ESLint
front_end/panels/ai_chat/ui/EvaluationDialog.ts
[error] 212-212: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 909-909: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts
[error] 717-717: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 723-723: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts
[error] 74-74: Expected property shorthand.
(object-shorthand)
[error] 74-74: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
[error] 11-11: ../../LLM/LLMClient.js
import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: ../../LLM/LLMClient.js
type import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 52-52: Expected property shorthand.
(object-shorthand)
[error] 88-88: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 93-93: Expected { after 'if' condition.
(curly)
[error] 96-96: Expected { after 'if' condition.
(curly)
[error] 99-99: Expected { after 'if' condition.
(curly)
[error] 103-103: Expected { after 'if' condition.
(curly)
[error] 109-109: Expected { after 'if' condition.
(curly)
[error] 457-457: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts
[error] 136-136: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 142-142: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/ui/AIChatPanel.ts
[error] 1028-1028: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1039-1039: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1551-1551: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1557-1557: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1560-1560: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1563-1563: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1572-1572: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1577-1577: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
🔇 Additional comments (12)
front_end/panels/ai_chat/evaluation/remote/EvaluationAgent.ts (3)
42-46
: Good decoupling: explicit model inputs. Moving judge/mini/nano models into options removes UI coupling. Ensure all constructors across call sites now pass non-empty strings.
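The options-based wiring with the non-empty-string check asked for above can be sketched as follows (a minimal sketch; `EvaluationAgentOptions` and the class name are assumptions for illustration, not the actual code):

```typescript
// Sketch: options-based model wiring that fails fast on empty model ids
// instead of surfacing errors deep inside an evaluation run.
// EvaluationAgentOptions and EvaluationAgentSketch are hypothetical names.
interface EvaluationAgentOptions {
  judgeModel: string;
  miniModel: string;
  nanoModel: string;
}

class EvaluationAgentSketch {
  readonly judgeModel: string;
  readonly miniModel: string;
  readonly nanoModel: string;

  constructor(options: EvaluationAgentOptions) {
    // Validate every model field up front.
    for (const [name, value] of Object.entries(options)) {
      if (typeof value !== 'string' || value.length === 0) {
        throw new Error(`EvaluationAgent requires a non-empty ${name}`);
      }
    }
    this.judgeModel = options.judgeModel;
    this.miniModel = options.miniModel;
    this.nanoModel = options.nanoModel;
  }
}
```

A constructor-level guard like this makes a missing model a loud configuration error rather than a silent fallback.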
62-65
: State stored on the instance looks correct. Private fields for judge/mini/nano are fine and make intent clear.
70-73
: Constructor wiring LGTM. Assigning the explicit models here aligns with the PR goal.
front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (1)
90-93
: Provider resolution via registry: good direction. This decouples from UI helpers and should be more robust.
front_end/panels/ai_chat/core/AgentService.ts (2)
105-163
: Provider-gated LLM init and crisp errors — good. Only registering the selected provider and emitting provider-specific missing-credential errors simplifies behavior and UX. LGTM.
168-199
: All AgentService.initialize callers updated
Verified in AIChatPanel.ts (lines 1359–1363) and EvaluationAgent.ts (line 725) that each invocation supplies apiKey, modelName, miniModel, and nanoModel—no further changes needed.
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (6)
67-70
: Explicit judge model for evaluation/extraction — good. Using a single `judgeModel` for both `extractionModel` and `evaluationModel` keeps the judge consistent.
88-88
: Vision flag normalization — LGTM. `Boolean(options.visionEnabled)` is clear and correct.
50-57
: Overall: good consolidation and config propagation. The options-based constructor, unified judge model selection, and tracing hooks are coherent and align with the PR goals.
473-473
: No-op change. EOF brace only; nothing to review.
76-79
: EvaluationConfig includes new model fields
The `EvaluationConfig` interface now defines `mainModel`, `miniModel`, and `nanoModel` (lines 164–166), so no further action is needed.
58-58
: Constructor signature change safe to merge. No instantiations of VisionAgentEvaluationRunner were found in the codebase, so there are no stale callers to update.
```ts
  /**
   * Resolve the provider for a given model using the LLM registry.
   * Falls back to the currently selected provider if unknown.
   */
  async #getProviderForModel(modelId: string): Promise<'openai' | 'litellm' | 'groq' | 'openrouter'> {
    try {
      const llm = LLMClient.getInstance();
      const models = await llm.getAvailableModels();
      const found = models.find(m => m.id === modelId);
      if (found) {
        return found.provider as any;
      }
    } catch {
      // ignore and fall through to fallback
    }
    // Fallback: current selection or openai
    return (localStorage.getItem('ai_chat_provider') || 'openai') as any;
  }
```
💡 Verification agent
❓ Verification inconclusive
Remove `any` casts and the UI-coupled localStorage fallback.
- ESLint flags the `any` casts.
- Falling back to `localStorage('ai_chat_provider')` reintroduces UI coupling. Prefer a neutral default (e.g., 'openai').
```diff
-  async #getProviderForModel(modelId: string): Promise<'openai' | 'litellm' | 'groq' | 'openrouter'> {
+  type ProviderType = 'openai' | 'litellm' | 'groq' | 'openrouter';
+  async #getProviderForModel(modelId: string): Promise<ProviderType> {
     try {
       const llm = LLMClient.getInstance();
       const models = await llm.getAvailableModels();
       const found = models.find(m => m.id === modelId);
       if (found) {
-        return found.provider as any;
+        return found.provider as ProviderType;
       }
     } catch {
       // ignore and fall through to fallback
     }
-    // Fallback: current selection or openai
-    return (localStorage.getItem('ai_chat_provider') || 'openai') as any;
+    // Fallback: neutral default
+    return 'openai';
   }
```
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 ESLint
[error] 136-136: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 142-142: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
🤖 Prompt for AI Agents
In front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts around
lines 126-144, remove the `any` casts and the UI-coupled localStorage fallback:
introduce or import a ProviderType union alias (e.g. type ProviderType =
'openai' | 'litellm' | 'groq' | 'openrouter'), change the method signature to
return Promise<ProviderType>, cast found.provider to ProviderType (not any), and
on error or when not found return the literal 'openai' default instead of
reading localStorage; keep the try/catch but avoid swallowing typed values so
the function always returns a ProviderType.
front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts
Claude finished @tysonthomas9's task — View job

PR Code Review

Comprehensive review of the model config, OpenRouter OAuth, and model refresh fixes.

Review Checklist:

Code Quality & Best Practices

✅ Strengths
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (1)
634-641
: 'inspect_element' and 'get_console_logs' are not registered — runtime failures likely. The Action Verification agent declares tools that aren't registered in ToolRegistry above. Import and register them, or remove them from the tools list.

```diff
-import { NavigateURLTool, PerformActionTool, GetAccessibilityTreeTool, SearchContentTool, NavigateBackTool, NodeIDsToURLsTool, TakeScreenshotTool, ScrollPageTool } from '../../tools/Tools.js';
+import { NavigateURLTool, PerformActionTool, GetAccessibilityTreeTool, SearchContentTool, NavigateBackTool, NodeIDsToURLsTool, TakeScreenshotTool, ScrollPageTool, InspectElementTool, GetConsoleLogsTool } from '../../tools/Tools.js';
   ToolRegistry.registerToolFactory('search_content', () => new SearchContentTool());
   ToolRegistry.registerToolFactory('take_screenshot', () => new TakeScreenshotTool());
+  ToolRegistry.registerToolFactory('inspect_element', () => new InspectElementTool());
+  ToolRegistry.registerToolFactory('get_console_logs', () => new GetConsoleLogsTool());
```

If these classes don't exist, replace the tools in createActionVerificationAgentConfig() with registered ones or implement the missing tools.
Also applies to: 95-117, 111-113
front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
1585-1601
: Handle `openrouter-models-updated` in AIChatPanel
No code subscribes to the CustomEvent you dispatch in SettingsDialog, so OpenRouter model updates are never applied. In AIChatPanel (e.g. its constructor), add:

```ts
window.addEventListener('openrouter-models-updated', (event: CustomEvent<{models: ModelOption[]}>) => {
  AIChatPanel.updateModelOptions(event.detail.models, false);
  AIChatPanel.instance().performUpdate();
});
```

front_end/panels/ai_chat/agent_framework/AgentRunner.ts (2)
314-328
: Guard provider presence in handoffs; avoid unsafe cast. Prevent undefined provider slipping through by validating before constructing the config.

```diff
-    const targetRunnerConfig: AgentRunnerConfig = {
+    if (!defaultProvider) {
+      throw new Error(`Provider not provided for handoff to '${targetAgentName}'.`);
+    }
+    const targetRunnerConfig: AgentRunnerConfig = {
       apiKey,
       modelName: resolvedModelName,
       systemPrompt: enhancedSystemPrompt,
       tools: targetConfig.tools
         .map(toolName => ToolRegistry.getRegisteredTool(toolName))
         .filter((tool): tool is Tool<any, any> => tool !== null),
       maxIterations: targetConfig.maxIterations || defaultMaxIterations,
       temperature: targetConfig.temperature ?? defaultTemperature,
-      provider: defaultProvider as LLMProvider,
+      provider: defaultProvider,
       getVisionCapability: defaultGetVisionCapability,
       miniModel,
       nanoModel,
     };
```
977-989
: Remove any-typed LLM context and args; type the tool call. Eliminates @typescript-eslint/no-explicit-any at 979 and improves tool context typing.

```diff
     logger.info(`${agentName} Executing tool: ${toolToExecute.name}`);
     const execTracingContext = getCurrentTracingContext();
-    toolResultData = await toolToExecute.execute(toolArgs as any, ({
-      provider: config.provider,
-      model: modelName,
-      miniModel: config.miniModel,
-      nanoModel: config.nanoModel,
-      getVisionCapability: config.getVisionCapability,
-      overrideSessionId: preallocatedChildId,
-      overrideParentSessionId: currentSession.sessionId,
-      overrideTraceId: execTracingContext?.traceId,
-    } as any));
+    const llmCtx: LLMContext = {
+      provider: config.provider,
+      model: modelName,
+      miniModel: config.miniModel,
+      nanoModel: config.nanoModel,
+      getVisionCapability: config.getVisionCapability,
+      overrideSessionId: preallocatedChildId,
+      overrideParentSessionId: currentSession.sessionId,
+      overrideTraceId: execTracingContext?.traceId,
+    };
+    toolResultData = await toolToExecute.execute(
+      toolArgs as Record<string, unknown>,
+      llmCtx
+    );
```

Add at top with other imports:

```diff
+import type { LLMContext } from '../tools/Tools.js';
```
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (2)
498-511
: Remove any-typed ctx; destructure overrides directly. Clears @typescript-eslint/no-explicit-any and simplifies. (Note: if `callCtx` is optional, destructure from `callCtx ?? {}` to avoid a runtime throw.)

```diff
-    const ctx: any = callCtx || {};
+    const { overrideSessionId, overrideParentSessionId, overrideTraceId } = callCtx ?? {};
     const result = await AgentRunner.run(
       internalMessages,
       args,
       runnerConfig,
       runnerHooks,
       this, // executingAgent
       undefined,
       {
-        sessionId: ctx.overrideSessionId,
-        parentSessionId: ctx.overrideParentSessionId,
-        traceId: ctx.overrideTraceId,
+        sessionId: overrideSessionId,
+        parentSessionId: overrideParentSessionId,
+        traceId: overrideTraceId,
       }
     );
```
197-202
: Fix logging string interpolation. Backticks are required; the current code logs the literal `${name}`.

```diff
-    logger.info('Registered and instantiated tool: ${name}');
+    logger.info(`Registered and instantiated tool: ${name}`);
```

front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
465-467
: Avoid `any` on the global assignment; declare Window typing. Removes no-explicit-any at 466.

```diff
-// Export for easy access in DevTools console
-(globalThis as any).EvaluationRunner = EvaluationRunner;
+// Export for easy access in DevTools console
+declare global {
+  interface Window { EvaluationRunner: typeof EvaluationRunner }
+}
+(globalThis as unknown as Window).EvaluationRunner = EvaluationRunner;
```

front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
448-479
: Provider-scoped fallback returns OpenAI models for other providers (logic bug). When provider is 'groq' or 'openrouter' with no cached models, getModelOptions(provider) incorrectly falls back to DEFAULT_OPENAI_MODELS, causing cross-provider validation drift. Return [] when a provider filter is supplied and nothing is cached.
Apply:

```diff
 static getModelOptions(provider?: 'openai' | 'litellm' | 'groq' | 'openrouter'): ModelOption[] {
 @@
-  // If nothing is found, return default OpenAI models
-  return provider === 'litellm' ? [] : DEFAULT_OPENAI_MODELS;
+  // If nothing is found:
+  // - when provider is specified, return empty list for that provider
+  // - when provider is not specified, return default OpenAI models
+  return provider ? [] : DEFAULT_OPENAI_MODELS;
 }
```
49-77
: Do not override localStorage in production and never log secret previews. Overwriting localStorage methods globally and logging value previews risks credential leakage (API keys) and side effects. Gate this behind a debug flag and redact values.
Minimal hardening:
```diff
 private setupStorageMonitoring(): void {
-    // Monitor setItem operations
-    localStorage.setItem = (key: string, value: string) => {
+    const debugEnabled = localStorage.getItem('ai_chat_debug_logging') === 'true';
+    // Monitor setItem operations
+    localStorage.setItem = (key: string, value: string) => {
       if (key.includes('openrouter') || key.includes('ai_chat')) {
-        logger.debug(`=== LOCALSTORAGE SET ===`);
-        logger.debug(`Key: ${key}`);
-        logger.debug(`Value exists: ${!!value}`);
-        logger.debug(`Value length: ${value?.length || 0}`);
-        logger.debug(`Value preview: ${value?.substring(0, 50) + (value?.length > 50 ? '...' : '') || 'null'}`);
-        logger.debug(`Timestamp: ${new Date().toISOString()}`);
+        if (debugEnabled) {
+          logger.debug(`=== LOCALSTORAGE SET ===`);
+          logger.debug(`Key: ${key}`);
+          logger.debug(`Value exists: ${!!value}`);
+          logger.debug(`Value length: ${value?.length || 0}`);
+          logger.debug(`Value preview: [redacted]`);
+          logger.debug(`Timestamp: ${new Date().toISOString()}`);
+        }
       }
       return this.originalSetItem(key, value);
     };
 @@
-    // Monitor removeItem operations
-    localStorage.removeItem = (key: string) => {
+    // Monitor removeItem operations
+    localStorage.removeItem = (key: string) => {
       if (key.includes('openrouter') || key.includes('ai_chat')) {
-        logger.debug(`=== LOCALSTORAGE REMOVE ===`);
-        logger.debug(`Key: ${key}`);
-        logger.debug(`Timestamp: ${new Date().toISOString()}`);
+        if (debugEnabled) {
+          logger.debug(`=== LOCALSTORAGE REMOVE ===`);
+          logger.debug(`Key: ${key}`);
+          logger.debug(`Timestamp: ${new Date().toISOString()}`);
+        }
       }
       return this.originalRemoveItem(key);
     };
   }
```
1019-1043
: On OAuth success, align provider selection with OpenRouter. If OAuth succeeds but PROVIDER_SELECTION_KEY remains on another provider, the fetched OpenRouter models won't be active. Set provider to 'openrouter' when an OpenRouter API key is present.

```diff
     window.addEventListener('openrouter-oauth-success', async () => {
 @@
-      if (apiKey) {
+      if (apiKey) {
+        // Ensure provider is set to OpenRouter
+        localStorage.setItem(PROVIDER_SELECTION_KEY, 'openrouter');
         try {
```
900-906
: Type consistency: ModelOption unions differ across files. This file defines ModelOption.type as 'openai' | 'litellm' | 'groq' | 'openrouter', but LLMTypes.ModelOption (per snippet) only includes 'openai' | 'litellm'. Unify to avoid type errors and runtime mismatch.
Proposed update in front_end/panels/ai_chat/LLM/LLMTypes.ts:
```diff
 export interface ModelOption {
   value: string;
-  type: 'openai' | 'litellm';
-  label?: string;
+  type: 'openai' | 'litellm' | 'groq' | 'openrouter';
+  label?: string;
 }
```

front_end/panels/ai_chat/tools/Tools.ts (5)
1120-1156
: Escape selector to prevent code injection in evaluate(). Interpolate the selector via JSON.stringify instead of raw string interpolation.

```diff
-      expression: `(() => {
-        const element = document.querySelector("${selector}");
+      expression: `(() => {
+        const sel = ${JSON.stringify(selector)};
+        const element = document.querySelector(sel);
         if (!element) {
           return {
             success: false,
-            message: "Element not found with selector: ${selector}"
+            message: "Element not found with selector: " + sel
           };
         }
```
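To see why this works: JSON.stringify emits a complete, escaped JS string literal, so hostile input cannot terminate the quoted string inside the evaluated expression. A standalone sketch (not the tool code itself):

```typescript
// JSON.stringify produces a valid JS string literal with inner quotes and
// backslashes escaped, so attacker-controlled input stays inside one string.
function buildExpression(selector: string): string {
  return `document.querySelector(${JSON.stringify(selector)})`;
}

// Naive interpolation of this payload would break out of the quotes:
//   document.querySelector(""); alert(1); ("")
const hostile = '"); alert(1); ("';

// With JSON.stringify the inner quotes arrive escaped, e.g.
//   document.querySelector("\"); alert(1); (\"")
const expr = buildExpression(hostile);
```

Evaluating the escaped literal yields the original string back unchanged, which is exactly the property the suggested diff relies on.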
1196-1204
: Escape query string in SearchContentTool. Avoid embedding unescaped text; use JSON.stringify.

```diff
-      expression: `(() => {
-        const query = "${query}";
+      expression: `(() => {
+        const query = ${JSON.stringify(query)};
         const limit = ${limit};
```

Also applies to: 1268-1291
1341-1367
: Escape direction in ScrollPageTool. Same injection risk for direction.

```diff
-        const direction = "${direction}";
+        const direction = ${JSON.stringify(direction)};
```
1656-1671
: Guard against undefined iframes. `treeResult.iframes` may be undefined; `.map` will throw.

```diff
-    const enhancedIframes = treeResult.iframes.map(iframe => ({
+    const enhancedIframes = (treeResult.iframes ?? []).map(iframe => ({
```
4163-4215
: New tools are not exposed by getTools(). ObjectiveDrivenActionTool and SchemaBasedDataExtractionTool are defined but not returned, making them unusable.
Add them:
```diff
 export function getTools(): Array<( ... )> {
   return [
     new ExecuteJavaScriptTool(),
 @@
     new WaitTool(),
-    new SequentialThinkingTool()
+    new SequentialThinkingTool(),
+    new ObjectiveDrivenActionTool(),
+    new SchemaBasedDataExtractionTool()
   ];
 }
```
♻️ Duplicate comments (5)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2)
243-248
: Good: enforce provider/miniModel; switch to miniModel. This addresses the earlier concern to validate context and standardize model selection.
Run to find call sites that pass ctx without miniModel (heuristic):
```bash
#!/bin/bash
set -euo pipefail
echo "Call sites for StreamlinedSchemaExtractorTool:"
rg -nP --type=ts --type=tsx -C2 'new\s+StreamlinedSchemaExtractorTool\b|\bextract_schema_streamlined\b' || true
echo
echo "Potential ctx without miniModel:"
rg -nP --type=ts --type=tsx -C2 '\bexecute\s*\([^,]*,\s*{[^}]*\bprovider\s*:\s*[^}]*}\s*\)' | rg -v 'miniModel' || true
```
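Beyond grepping, the enforcement itself can live in a small guard shared by both call sites (a sketch; the `LLMContext` field names follow this PR, the helper name is hypothetical):

```typescript
// Hypothetical guard: fail fast when a tool is invoked without the context
// fields the extractor needs, instead of failing mid-extraction.
interface LLMContextSketch {
  provider?: string;
  model?: string;
  miniModel?: string;
  nanoModel?: string;
}

function requireMiniModel(ctx?: LLMContextSketch): { provider: string; miniModel: string } {
  if (!ctx?.provider || !ctx?.miniModel) {
    throw new Error('extract_schema_streamlined requires ctx.provider and ctx.miniModel');
  }
  return { provider: ctx.provider, miniModel: ctx.miniModel };
}
```

Centralizing the check keeps the main path and the URL-retry path from drifting apart.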
394-399
: Good: enforce provider/miniModel in URL-retry path and use miniModel. Matches prior guidance and keeps both call sites consistent.
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)
430-434
: Good: CallCtx adoption replaces inline import() types. This addresses prior ESLint complaints and clarifies the execution context.
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (2)
153-157
: LLM init race handled (awaiting #llmInitPromise). Prevents tools from touching LLMClient before init completes.
312-316
: Batch path also correctly awaits LLM init. Race fixed for runAllTests too.
🧹 Nitpick comments (32)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (4)
13-13
: Fix import order to satisfy ESLint (import/order). Place the LLMResponseParser import before LLMTracingWrapper as flagged by the linter.

```diff
-import { callLLMWithTracing } from './LLMTracingWrapper.js';
-import type { Tool, LLMContext } from './Tools.js';
-import { LLMResponseParser } from '../LLM/LLMResponseParser.js';
+import { LLMResponseParser } from '../LLM/LLMResponseParser.js';
+import { callLLMWithTracing } from './LLMTracingWrapper.js';
+import type { Tool, LLMContext } from './Tools.js';
```
272-283
: Robust JSON parsing added; fixany
typing and consider schema validation.Parsing is solid (strict → fallbacks). Replace
any
to satisfy ESLint and prevent unsafe usage; optionally validate against the provided JSON Schema before returning.- let parsed: any; + let parsed: unknown; try { parsed = LLMResponseParser.parseStrictJSON(text); } catch { parsed = LLMResponseParser.parseJSONWithFallbacks(text); } logger.debug(`JSON extraction successful on attempt ${attempt}`); - return parsed; + return parsed;Optional (outside this hunk): validate
parsed
with Ajv againstschema
and throw on validation errors to fail fast. I can provide the snippet on request.
280-280
: Trim trailing whitespace (ESLint: no-trailing-spaces).- +
422-432
: Don’t abort URL retries on first non-JSON response.Returning null here causes the caller to break early; consider allowing subsequent retry attempts even if one parse fails.
If you prefer minimal change, keep returning
null
here and adjust the loop to continue:- } else { - logger.warn(`URL retry attempt ${attempt} failed`); - break; - } + } else { + logger.warn(`URL retry attempt ${attempt} produced non-JSON; continuing retries...`); + continue; + }(Apply in resolveUrlsWithRetry at lines 185–188.)
front_end/panels/ai_chat/core/Constants.ts (1)
39-45
: Add storage keys for OpenRouter cache to avoid magic strings elsewhereSettingsDialog writes to 'openrouter_models_cache' and 'openrouter_models_cache_timestamp'. Consider adding these to STORAGE_KEYS here and consuming them in the UI code to keep keys DRY.
export const STORAGE_KEYS = { CUSTOM_PROMPTS: 'ai_chat_custom_prompts', LITELLM_MODELS: 'litellm_custom_models', API_KEY: 'ai_chat_api_key', LITELLM_ENDPOINT: 'litellm_endpoint', SELECTED_MODEL: 'selected_model', + OPENROUTER_MODELS_CACHE: 'openrouter_models_cache', + OPENROUTER_MODELS_CACHE_TS: 'openrouter_models_cache_timestamp', } as const;Also applies to: 90-95
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (1)
5-21
: Fix import order (lint: import/order)Move Constants import above other relative imports to satisfy the linter.
-import { FetcherTool } from '../../tools/FetcherTool.js'; +import { MODEL_SENTINELS } from '../../core/Constants.js'; +import { FetcherTool } from '../../tools/FetcherTool.js'; import { FinalizeWithCritiqueTool } from '../../tools/FinalizeWithCritiqueTool.js'; ... -import { WaitTool } from '../../tools/Tools.js'; -import { MODEL_SENTINELS } from '../../core/Constants.js'; +import { WaitTool } from '../../tools/Tools.js';front_end/panels/ai_chat/ui/SettingsDialog.ts (5)
32-34
: Co-locate cache key constantsOPENROUTER_MODELS_CACHE_DURATION_MS here is fine; also consider sourcing cache keys from core/Constants.STORAGE_KEYS to avoid string duplication.
552-565
: Keep OpenRouter cache and timestamp in sync across all fetch pathsGood: on provider change you persist both cache and timestamp. Ensure the button-driven fetch path below also persists the models list to the cache to avoid stale cache users.
1755-1760
: Add explicit return type to satisfy lint ruleType the async function.
-async function updateOpenRouterModelSelectors() { +async function updateOpenRouterModelSelectors(): Promise<void> {
1829-1836
: Also persist fetched models into the OpenRouter cache for consistencyCurrently only the timestamp is updated here; write the models too (mirrors provider-change path).
-// Update timestamp for cache management -localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString()); +// Update cache and timestamp for cache management +localStorage.setItem('openrouter_models_cache', JSON.stringify(modelOptions)); +localStorage.setItem('openrouter_models_cache_timestamp', Date.now().toString());
1697-1747
: Remove trailing spaces and satisfy ESLintMultiple trailing-space violations flagged. Please run the formatter/linter (e.g., pnpm lint --fix) to clean these.
Also applies to: 1831-1833, 3185-3203
front_end/panels/ai_chat/tools/DocumentSearchTool.ts (1)
7-7
: Reorder type import before value import (lint: import/order)Place the type-only Tools import above VectorDBClient import.
-import { VectorDBClient, type VectorSearchResult } from './VectorDBClient.js'; -import type { Tool, LLMContext } from './Tools.js'; +import type { Tool, LLMContext } from './Tools.js'; +import { VectorDBClient, type VectorSearchResult } from './VectorDBClient.js';front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts (1)
9-13
: Move imports to the top (lint: import/first) and group type importAll imports should precede executable code.
-const logger = createLogger('FinalizeWithCritiqueTool'); - -import { CritiqueTool} from './CritiqueTool.js'; -import type { Tool, LLMContext } from './Tools.js'; +import { CritiqueTool } from './CritiqueTool.js'; +import type { Tool, LLMContext } from './Tools.js'; + +const logger = createLogger('FinalizeWithCritiqueTool');front_end/panels/ai_chat/tools/BookmarkStoreTool.ts (2)
10-10
: Fix import order to satisfy ESLint (import/order).Place the non-relative/type import groups consistently and move the type-only Tools import before VectorDBClient. Also add a blank line between groups.
-import { VectorDBClient, type VectorDocument, type VectorStoreResponse } from './VectorDBClient.js'; -import type { Tool, LLMContext } from './Tools.js'; +import type { Tool, LLMContext } from './Tools.js'; +import { VectorDBClient, type VectorDocument, type VectorStoreResponse } from './VectorDBClient.js';
62-65
: includeFullContent is declared but unused.Either plumb it into the extraction prompt (e.g., summary vs full content) or remove from the schema to avoid confusing API users.
- const markdownResult = await this.htmlToMarkdownTool.execute({ - instruction: `Extract the main content from this page for bookmarking. Focus on the primary article or content that would be useful for later reference.`, + const markdownResult = await this.htmlToMarkdownTool.execute({ + instruction: `Extract ${args.includeFullContent === false ? 'a concise summary of' : 'the main content from'} this page for bookmarking. Focus on the primary article or content that would be useful for later reference.`,front_end/panels/ai_chat/ui/EvaluationDialog.ts (3)
19-19
: Resolve import order lint error.Move Constants import before EvaluationRunner per ESLint import/order.
-import { EvaluationRunner } from '../evaluation/runner/EvaluationRunner.js'; -import { VisionAgentEvaluationRunner } from '../evaluation/runner/VisionAgentEvaluationRunner.js'; -… -import { AIChatPanel } from './AIChatPanel.js'; -import { MODEL_PLACEHOLDERS } from '../core/Constants.js'; +import { MODEL_PLACEHOLDERS } from '../core/Constants.js'; +import { EvaluationRunner } from '../evaluation/runner/EvaluationRunner.js'; +import { VisionAgentEvaluationRunner } from '../evaluation/runner/VisionAgentEvaluationRunner.js'; +… +import { AIChatPanel } from './AIChatPanel.js';
847-856
: Simplify provider filtering using API and keep placeholder check.Use AIChatPanel.getModelOptions(currentProvider) to avoid depending on option.type semantics.
- // Get all model options but only show models from the same provider as the selected provider - const modelOptions = AIChatPanel.getModelOptions(); - - // Filter models to only show those from the selected provider - const filteredModels = modelOptions.filter(option => { - if (option.value.startsWith(MODEL_PLACEHOLDERS.NO_MODELS) || option.value.startsWith(MODEL_PLACEHOLDERS.ADD_CUSTOM)) { - return false; // Skip placeholder options - } - // Use the model's type to determine if it belongs to the selected provider - return option.type === currentProvider; - }); + // Get models for the current provider and filter out placeholders + const providerModels = AIChatPanel.getModelOptions(currentProvider); + const filteredModels = providerModels.filter(option => + !option.value.startsWith(MODEL_PLACEHOLDERS.NO_MODELS) && + !option.value.startsWith(MODEL_PLACEHOLDERS.ADD_CUSTOM) + );Also applies to: 851-856
213-213
: Trim trailing spaces flagged by ESLint.Minor formatting fixes to keep CI green.
Also applies to: 910-910
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (3)
12-12
: Fix import order (non-type imports before type-only).Place LLMResponseParser before the type-only Tools import.
-import type { LLMContext } from './Tools.js'; -import { callLLMWithTracing } from './LLMTracingWrapper.js'; -import { LLMResponseParser } from '../LLM/LLMResponseParser.js'; +import { LLMResponseParser } from '../LLM/LLMResponseParser.js'; +import { callLLMWithTracing } from './LLMTracingWrapper.js'; +import type { LLMContext } from './Tools.js';Also applies to: 10-12
697-707
: Replaceany
with safe typing and a type guard for metadata.Removes ESLint warning and avoids unsafe property access.
- let parsedMetadata: any = null; + let parsedMetadata: unknown = null; @@ - // Basic validation - if (typeof parsedMetadata?.progress === 'string' && typeof parsedMetadata?.completed === 'boolean') { - return parsedMetadata as ExtractionMetadata; - } + // Basic validation + const isExtractionMetadata = (v: unknown): v is ExtractionMetadata => { + if (!v || typeof v !== 'object') return false; + const o = v as Record<string, unknown>; + return typeof o.progress === 'string' && typeof o.completed === 'boolean'; + }; + if (isExtractionMetadata(parsedMetadata)) { + return parsedMetadata; + }If preferred, extract the type guard outside the method once for reuse.
Also applies to: 709-714
798-848
: Method name mismatch: resolveUrlsWithLLM is now programmatic.Rename to reflect no-LLM path to reduce confusion for readers and telemetry.
- private async resolveUrlsWithLLM(options: { + private async resolveUrlsProgrammatically(options: { @@ - const updatedData = this.findAndReplaceNodeIds(data, nodeIdToUrlMap); + const updatedData = this.findAndReplaceNodeIds(data, nodeIdToUrlMap);Note: update call sites accordingly.
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
174-186
: Use safe hasOwnProperty to avoid prototype pitfalls.Prefer Object.prototype.hasOwnProperty.call.
- fieldsToRemove.forEach(field => { - if (sanitized.hasOwnProperty(field)) { - delete sanitized[field]; - } - }); + for (const field of fieldsToRemove) { + if (Object.prototype.hasOwnProperty.call(sanitized, field)) { + delete (sanitized as Record<string, unknown>)[field]; + } + }front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
46-55
: Use object property shorthand for evaluationModel.Trims one ESLint object-shorthand warning.
- const evaluationModel = judgeModel; + const evaluationModel = judgeModel; ... - evaluationModel: evaluationModel, + evaluationModel,front_end/panels/ai_chat/ui/AIChatPanel.ts (3)
1546-1584
: Remove redundant dynamic imports; use already-imported modules.LLMClient and SettingsDialog are imported at top; dynamic imports here add bundle churn without benefit.
async #autoFetchOpenRouterModels(apiKey: string): Promise<void> { try { logger.debug('Fetching OpenRouter models automatically after OAuth...'); - // Import LLMClient and SettingsDialog dynamically to fetch and update models - const [{ LLMClient }, { SettingsDialog }] = await Promise.all([ - import('../LLM/LLMClient.js'), - import('./SettingsDialog.js') - ]); - const openrouterModels = await LLMClient.fetchOpenRouterModels(apiKey); logger.debug(`Auto-fetched ${openrouterModels.length} OpenRouter models`); // Update models programmatically via SettingsDialog static method SettingsDialog.updateOpenRouterModels(openrouterModels);
86-86
: Move Constants import to top to satisfy import/first and import/order.ESLint flagged this; place it with other imports.
- import { MODEL_PLACEHOLDERS } from '../core/Constants.js'; +// near other imports at the top of the file +import { MODEL_PLACEHOLDERS } from '../core/Constants.js';
1029-1040
: Fix trailing spaces to appease ESLint.Multiple lines flagged for trailing whitespace.
Also applies to: 1254-1256, 1552-1552, 1558-1564, 1573-1578
front_end/panels/ai_chat/tools/Tools.ts (6)
660-669
: Template string bug in log.Using single quotes with ${url} prints the literal string.
- logger.info('Initiating navigation to: ${url}'); + logger.info(`Initiating navigation to: ${url}`);
995-1035
: Minor: Mismatched helper call arg.getSimplifiedHTML is declared without params but called with rawHtml; drop the arg.
- html: getSimplifiedHTML(rawHtml), + html: getSimplifiedHTML(),
891-969
: Logging severity and messaging in NavigateBackTool.Normal control flow logs use logger.error; downgrade to info/warn and make wait loop quieter.
- logger.error('navigate_back', args); + logger.info('navigate_back', args); @@ - // Only use supported console methods - logger.error('Navigation completed, document ready state is complete'); + logger.info('Navigation completed, document ready state is complete'); @@ - logger.error('Still waiting for navigation to complete...'); + logger.debug('Still waiting for navigation to complete...'); @@ - logger.error('Navigation timed out after waiting for document ready state'); + logger.warn('Navigation timed out after waiting for document ready state');
374-381
: Normalize error handling for unknown type.Several catch blocks use error.message on unknown. Prefer a helper or inline guard.
Example pattern:
```diff
-} catch (error) {
-  return { error: `Failed to execute JavaScript: ${error.message}` };
+} catch (error) {
+  const msg = error instanceof Error ? error.message : String(error);
+  return { error: `Failed to execute JavaScript: ${msg}` };
 }
```
Apply similarly where applicable.
Also applies to: 472-475, 759-762, 983-985, 1090-1092, 1156-1160, 1296-1299, 1382-1385, 1492-1496, 2327-2333, 2470-2475, 3133-3189, 4031-4031, 4064-4066, 4133-4134
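For reference, the suggested guard can live in a small shared helper so every catch block stays one line. This is a sketch; the helper name is ours, not from the PR:

```typescript
// Hypothetical helper: normalize an unknown catch value to a message string.
// TypeScript types catch variables as `unknown`, so narrow before reading .message.
function toErrorMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}

console.log(toErrorMessage(new Error('boom')));  // boom
console.log(toErrorMessage(42));                 // 42
```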
701-752
: 404 confirmation path requires ctx.nanoModel; OK, but fall back to ctx.model when nano is absent? Right now a missing nanoModel short-circuits; consider falling back to ctx.model to increase robustness.
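The suggested fallback is a one-liner; a minimal sketch (context shape simplified from LLMContext, function name ours):

```typescript
// Prefer the cheap nano model for the 404 confirmation call,
// but fall back to the main model instead of short-circuiting.
interface ModelCtx {
  model?: string;
  nanoModel?: string;
}

function pickConfirmationModel(ctx: ModelCtx): string | undefined {
  return ctx.nanoModel ?? ctx.model;
}

console.log(pickConfirmationModel({ model: 'main-model' }));                          // main-model
console.log(pickConfirmationModel({ model: 'main-model', nanoModel: 'nano-model' })); // nano-model
```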
1853-1860
: Post-action verification: contentEditable detection is ineffective. isContentEditableElement is never set to true with the current suitability payload. Simplify by trimming actualValue when it's a string.
```diff
-const comparisonValue = isContentEditableElement ? actualValue?.trim() : actualValue;
+const comparisonValue = typeof actualValue === 'string' ? actualValue.trim() : actualValue;
```
Also applies to: 2007-2014
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts
(8 hunks)front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
(3 hunks)front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts
(11 hunks)front_end/panels/ai_chat/core/Constants.ts
(1 hunks)front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
(8 hunks)front_end/panels/ai_chat/tools/BookmarkStoreTool.ts
(4 hunks)front_end/panels/ai_chat/tools/CritiqueTool.ts
(6 hunks)front_end/panels/ai_chat/tools/DocumentSearchTool.ts
(3 hunks)front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts
(3 hunks)front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts
(5 hunks)front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts
(5 hunks)front_end/panels/ai_chat/tools/Tools.ts
(16 hunks)front_end/panels/ai_chat/ui/AIChatPanel.ts
(10 hunks)front_end/panels/ai_chat/ui/EvaluationDialog.ts
(5 hunks)front_end/panels/ai_chat/ui/SettingsDialog.ts
(8 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- front_end/panels/ai_chat/tools/CritiqueTool.ts
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/config.js : Require OPENAI_API_KEY and PORT (default 8080) from environment configuration
Applied to files:
front_end/panels/ai_chat/tools/DocumentSearchTool.ts
front_end/panels/ai_chat/tools/BookmarkStoreTool.ts
📚 Learning: 2025-08-29T23:08:07.342Z
Learnt from: CR
PR: BrowserOperator/browser-operator-core#0
File: eval-server/nodejs/CLAUDE.md:0-0
Timestamp: 2025-08-29T23:08:07.342Z
Learning: Applies to eval-server/nodejs/src/evaluator.js : LLM evaluator must integrate with the OpenAI API to judge agent responses
Applied to files:
front_end/panels/ai_chat/ui/AIChatPanel.ts
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
🧬 Code graph analysis (13)
front_end/panels/ai_chat/tools/DocumentSearchTool.ts (1)
front_end/panels/ai_chat/tools/Tools.ts (1)
LLMContext
(54-60)
front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts (1)
front_end/panels/ai_chat/tools/Tools.ts (1)
LLMContext
(54-60)
front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_PLACEHOLDERS
(91-94)front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
SettingsDialog
(362-3208)front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
ModelOption
(88-92)
front_end/panels/ai_chat/tools/BookmarkStoreTool.ts (1)
front_end/panels/ai_chat/tools/Tools.ts (1)
LLMContext
(54-60)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)
front_end/panels/ai_chat/LLM/LLMResponseParser.ts (1)
LLMResponseParser
(13-306)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (4)
front_end/panels/ai_chat/evaluation/framework/GenericToolEvaluator.ts (1)
GenericToolEvaluator
(32-328)front_end/panels/ai_chat/evaluation/framework/judges/LLMEvaluator.ts (1)
LLMEvaluator
(19-363)front_end/panels/ai_chat/evaluation/framework/types.ts (1)
EvaluationConfig
(154-176)front_end/panels/ai_chat/LLM/LLMClient.ts (2)
LLMProviderConfig
(19-23)LLMClient
(49-385)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (4)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(920-1013)logger
(1304-1374)logger
(1397-1424)logger
(1531-1544)logger
(1588-1591)front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
LLMProvider
(142-142)front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_SENTINELS
(85-88)front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
AgentRunnerConfig
(26-41)
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (1)
front_end/panels/ai_chat/LLM/LLMResponseParser.ts (1)
LLMResponseParser
(13-306)
front_end/panels/ai_chat/ui/EvaluationDialog.ts (4)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts (1)
EvaluationRunner
(29-463)front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
AIChatPanel
(307-2377)error
(1901-1914)front_end/panels/ai_chat/evaluation/runner/VisionAgentEvaluationRunner.ts (1)
VisionAgentEvaluationRunner
(50-473)front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_PLACEHOLDERS
(91-94)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (1)
front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_SENTINELS
(85-88)
front_end/panels/ai_chat/tools/Tools.ts (1)
front_end/panels/ai_chat/ui/AIChatPanel.ts (2)
provider
(1431-1494)model
(1751-1764)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (3)
front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_SENTINELS
(85-88)front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(920-1013)logger
(1304-1374)logger
(1397-1424)logger
(1531-1544)logger
(1588-1591)front_end/panels/ai_chat/tracing/TracingConfig.ts (1)
getCurrentTracingContext
(299-301)
front_end/panels/ai_chat/ui/SettingsDialog.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (6)
localStorage
(785-808)error
(1901-1914)ModelOption
(92-96)model
(1751-1764)updateModelOptions
(487-606)window
(1018-1065)front_end/panels/ai_chat/LLM/LLMClient.ts (1)
LLMClient
(49-385)
🪛 ESLint
front_end/panels/ai_chat/tools/DocumentSearchTool.ts
[error] 7-7: ./Tools.js
type import should occur before import of ./VectorDBClient.js
(import/order)
front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts
[error] 12-12: Import in body of module; reorder to top.
(import/first)
front_end/panels/ai_chat/ui/AIChatPanel.ts
[error] 86-86: Import in body of module; reorder to top.
(import/first)
[error] 86-86: ../core/Constants.js
import should occur before import of ../LLM/LLMClient.js
(import/order)
[error] 1029-1029: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1040-1040: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1254-1254: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1552-1552: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1558-1558: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1561-1561: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1564-1564: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1573-1573: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1578-1578: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/tools/BookmarkStoreTool.ts
[error] 10-10: There should be at least one empty line between import groups
(import/order)
[error] 10-10: ./Tools.js
type import should occur before import of ./VectorDBClient.js
(import/order)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts
[error] 13-13: ../LLM/LLMResponseParser.js
import should occur before import of ./LLMTracingWrapper.js
(import/order)
[error] 274-274: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 280-280: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/evaluation/runner/EvaluationRunner.ts
[error] 11-11: ../../LLM/LLMClient.js
import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: ../../LLM/LLMClient.js
type import should occur after import of ../../core/Constants.js
(import/order)
[error] 12-12: Type imports must be imported in the same import statement as values, using the type keyword
(rulesdir/inline-type-imports)
[error] 53-53: Expected property shorthand.
(object-shorthand)
[error] 89-89: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 94-94: Expected { after 'if' condition.
(curly)
[error] 97-97: Expected { after 'if' condition.
(curly)
[error] 100-100: Expected { after 'if' condition.
(curly)
[error] 104-104: Expected { after 'if' condition.
(curly)
[error] 110-110: Expected { after 'if' condition.
(curly)
[error] 466-466: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
[error] 10-10: There should be at least one empty line between import groups
(import/order)
[error] 10-10: ../core/Constants.js
import should occur before type import of ../tools/Tools.js
(import/order)
[error] 11-11: There should be at least one empty line between import groups
(import/order)
[error] 12-12: ../LLM/LLMTypes.js
type import should occur before type import of ../tools/Tools.js
(import/order)
[error] 16-16: Import in body of module; reorder to top.
(import/first)
[error] 16-16: ./AgentRunner.js
import should occur before type import of ./AgentSessionTypes.js
(import/order)
[error] 20-20: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 21-21: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 22-22: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 23-23: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 24-24: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 25-25: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 26-26: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 27-27: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 28-28: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 429-429: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 432-432: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 457-457: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 462-462: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 467-467: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 471-471: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 498-498: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts
[error] 12-12: ../LLM/LLMResponseParser.js
import should occur before type import of ./Tools.js
(import/order)
[error] 697-697: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/ui/EvaluationDialog.ts
[error] 19-19: ../core/Constants.js
import should occur before import of ../evaluation/runner/EvaluationRunner.js
(import/order)
[error] 213-213: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 910-910: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts
[error] 19-19: ../../core/Constants.js
import should occur before import of ../../tools/FetcherTool.js
(import/order)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts
[error] 21-21: Import in body of module; reorder to top.
(import/first)
[error] 21-21: ../core/Constants.js
import should occur before import of ../core/PageInfoManager.js
(import/order)
[error] 979-979: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/ui/SettingsDialog.ts
[error] 1697-1697: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1704-1704: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1707-1707: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1725-1725: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1730-1730: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1733-1733: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1740-1740: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1744-1744: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1747-1747: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1755-1755: Missing return type on function.
(@typescript-eslint/explicit-function-return-type)
[error] 1757-1757: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1760-1760: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1831-1831: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 1833-1833: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3182-3182: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 3185-3185: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3192-3192: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3196-3196: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3201-3201: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 3203-3203: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
🔇 Additional comments (20)
front_end/panels/ai_chat/core/Constants.ts (1)
84-89
: Good centralization of model sentinels and UI placeholders. Shared constants reduce string drift across agents and UI. LGTM.
Also applies to: 90-95
front_end/panels/ai_chat/agent_framework/implementation/ConfiguredAgents.ts (2)
26-90
: Confirm omission of modelName for direct_url_navigator_agent is intentional. Other agents set a model; this one doesn't. Verify it uses the desired default and not a stale or null model.
323-326
: Sentinel resolution already implemented; no changes required
ConfigurableAgentTool maps MODEL_SENTINELS.USE_MINI/USE_NANO to callCtx.miniModel/callCtx.nanoModel (ConfigurableAgentTool.ts:435–444), and AgentRunner likewise resolves sentinels before handing off to agents (AgentRunner.ts:299–309).
front_end/panels/ai_chat/ui/SettingsDialog.ts (1)
1692-1719
: Auto-refresh logic looks solid. TTL check + silent refresh before building selectors is a good UX improvement.
Also applies to: 1721-1752
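The TTL gate described above boils down to one comparison; a minimal sketch with the 60-minute TTL (field names illustrative, not the actual SettingsDialog internals):

```typescript
// Refresh OpenRouter models only when the cache is missing or older than the TTL.
const TTL_MS = 60 * 60 * 1000; // 60 minutes

interface CachedModels {
  models: string[];
  fetchedAt: number; // epoch milliseconds
}

function needsRefresh(cache: CachedModels | null, now: number = Date.now()): boolean {
  return cache === null || now - cache.fetchedAt > TTL_MS;
}

const fresh: CachedModels = { models: ['openrouter/some-model'], fetchedAt: Date.now() };
console.log(needsRefresh(null));   // true: nothing cached yet
console.log(needsRefresh(fresh));  // false: within TTL, skip the network call
```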
front_end/panels/ai_chat/tools/DocumentSearchTool.ts (1)
90-101
: Signature change to accept LLMContext is fine. The optional ctx param aligns this tool with the framework-wide context propagation.
front_end/panels/ai_chat/tools/FinalizeWithCritiqueTool.ts (1)
77-116
: Context propagation to CritiqueTool is correctly implemented
CritiqueTool.execute(args: CritiqueToolArgs, ctx?: LLMContext) matches the call-site signature, so passing ctx is valid.
front_end/panels/ai_chat/tools/BookmarkStoreTool.ts (2)
109-112
: Good: forwards LLM context into HTMLToMarkdownTool. This keeps downstream model selection consistent with the new ctx plumbing.
75-75
: Incorrect review comment: there are no manual .execute(…) invocations of BookmarkStoreTool or HTMLToMarkdownTool in the codebase—tools are invoked solely via the framework's generic registry (which passes ctx as needed), so no call-site updates are required.
Likely an incorrect or invalid review comment.
front_end/panels/ai_chat/ui/EvaluationDialog.ts (2)
203-209
: Runner config injection looks correct. Passing judgeModel + main/mini/nano decouples the runner from UI internals as intended.
215-221
: Vision runner config wiring is sound. Includes visionEnabled and the same model set; matches the new constructor contract.
front_end/panels/ai_chat/tools/SchemaBasedExtractorTool.ts (3)
483-493
: Robust JSON parsing with strict→fallback chain is solid. Good balance of reliability and telemetry; a null return on failure is clear to the caller.
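The strict→fallback chain can be sketched as below. This is a simplified stand-in: the real LLMResponseParser has richer recovery; here the only fallback is stripping a markdown code fence:

```typescript
// Try strict JSON first; on failure, look for a fenced ```json block; else null.
function parseStrictThenFallback(text: string): unknown {
  try {
    return JSON.parse(text);              // strict pass
  } catch {
    const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
    if (fenced) {
      try {
        return JSON.parse(fenced[1]);     // fallback: JSON inside a code fence
      } catch {
        return null;
      }
    }
    return null;                          // clear failure signal for the caller
  }
}

console.log(parseStrictThenFallback('{"score": 0.9}'));
console.log(parseStrictThenFallback('```json\n{"score": 0.7}\n```'));
console.log(parseStrictThenFallback('not json'));  // null
```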
575-585
: Same here: refinement parser flow LGTM. Consistent error handling and logging.
455-460
: No action required: all invocations of SchemaBasedExtractorTool.execute receive a populated ctx.nanoModel (via AgentNodes, CombinedExtractionTool and UI/test harness), so the strict guard is safe.front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
314-328
: Mini/Nano propagation through handoffs is wired correctly. Passing miniModel/nanoModel into targetRunnerConfig ensures downstream tools/agents have the right variants.
front_end/panels/ai_chat/ui/AIChatPanel.ts (4)
562-566
: Placeholder constants usage looks good. Groq/OpenRouter/LiteLLM empty states now use MODEL_PLACEHOLDERS consistently; this prevents accidental initialization on placeholders.
Also applies to: 573-577, 585-589
1358-1365
: Initialize AgentService with mini/nano fallbacks — good wiring. Passing miniForInit and nanoForInit matches the extended initialize signature and ensures nano falls back to mini→main.
1286-1290
: EvaluationAgent now receives judge/mini/nano — align types downstream. Ensure EvaluationAgent's constructor and any validators accept undefined/empty mini/nano gracefully.
If needed, I can scan the repo to confirm the new constructor signature is used consistently.
1251-1256
: Add sentinel guard in isPlaceholder check. Logic is fine; no action required.
front_end/panels/ai_chat/tools/Tools.ts (2)
833-845
: LLM calls: choose nanoModel when present (done) — good use of a small model for verification. No changes requested.
Also applies to: 2152-2210, 2233-2276
511-629
: waitForPageLoad: solid overall approach with cleanup — approve. Races load/LCP/timeout and removes listeners.
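The race-with-cleanup shape praised here can be sketched as follows. This is illustrative, not the actual waitForPageLoad implementation (which also races LCP and uses CDP events rather than DOM listeners):

```typescript
// Race a 'load' event against a timeout; whichever settles first runs cleanup,
// so neither the timer nor the listener leaks.
function waitForLoadOrTimeout(target: EventTarget, timeoutMs: number): Promise<string> {
  return new Promise(resolve => {
    const onLoad = (): void => { cleanup(); resolve('load'); };
    const timer = setTimeout(() => { cleanup(); resolve('timeout'); }, timeoutMs);
    function cleanup(): void {
      clearTimeout(timer);
      target.removeEventListener('load', onLoad);
    }
    target.addEventListener('load', onLoad);
  });
}
```

Dispatching a `load` event before the deadline resolves with `'load'`; otherwise the timer wins with `'timeout'`.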
Claude finished @tysonthomas9's task — View job
PR Code Review
Comprehensive review of the model config, OpenRouter OAuth, and model refresh fixes.
Review Checklist:
Code Quality & Best Practices
✅ Strengths
@codex review
Codex Review: Here are some suggestions.
Reply with @codex fix comments
to fix any unresolved comments.
About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you open a pull request for review, mark a draft as ready, or comment "@codex review". If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex fix this CI failure" or "@codex address that feedback".
this.#evaluationAgent = new EvaluationAgent({ | ||
clientId: compositeClientId, | ||
endpoint: config.endpoint, | ||
secretKey: config.secretKey | ||
secretKey: config.secretKey, | ||
judgeModel: this.#selectedModel, | ||
miniModel: this.#miniModel, | ||
nanoModel: this.#nanoModel, |
[P1] Pass validated mini/nano model IDs when creating EvaluationAgent
The new EvaluationAgent constructor requires miniModel and nanoModel, but #createEvaluationAgentIfNeeded passes the raw this.#miniModel/this.#nanoModel fields. These fields are often empty strings for providers that don't have explicit mini/nano entries (e.g. LiteLLM) and are only normalized in getters like AIChatPanel.getMiniModel(), which fall back to the main model. When the evaluation agent later calls AgentService.initialize with those empty strings, the graph is built without mini/nano values, and any agent using MODEL_SENTINELS.USE_MINI/NANO throws "Mini model not provided…" during evaluation, even though normal chat fallback works. Use the validated getters (or apply the same fallback) before constructing the evaluation agent so evaluations don't fail when the user hasn't configured separate mini/nano models.
Actionable comments posted: 0
♻️ Duplicate comments (4)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2)
243-248
: Good: enforce provider/miniModel presence for the extraction path. This aligns with earlier feedback and prevents accidental main-model use.
397-435
: Good: URL-retry path now enforces provider/miniModel and parses JSON. Matches the previous ask; graceful null on non-JSON keeps control flow simple.
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
18-22
: Move imports above logger; group per ESLint. Fix import/first and import/order by moving the late imports before the logger.
```diff
-const logger = createLogger('AgentRunner');
-
-import { ConfigurableAgentTool, ToolRegistry, type ConfigurableAgentArgs, type ConfigurableAgentResult, type AgentRunTerminationReason, type HandoffConfig /* , HandoffContextTransform, ContextFilterRegistry*/ } from './ConfigurableAgentTool.js';
-import { MODEL_SENTINELS } from '../core/Constants.js';
+import { ConfigurableAgentTool, ToolRegistry, type ConfigurableAgentArgs, type ConfigurableAgentResult, type AgentRunTerminationReason, type HandoffConfig /* , HandoffContextTransform, ContextFilterRegistry*/ } from './ConfigurableAgentTool.js';
+import { MODEL_SENTINELS } from '../core/Constants.js';
+
+const logger = createLogger('AgentRunner');
```
Tip: also place the Constants import before PageInfoManager to silence the remaining order warning.
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (1)
14-16
: Move late import above logger to satisfy import/first. Place the AgentRunner import with the other imports before creating the logger.
```diff
-const logger = createLogger('ConfigurableAgentTool');
-
-import { AgentRunner, type AgentRunnerConfig, type AgentRunnerHooks } from './AgentRunner.js';
+import { AgentRunner, type AgentRunnerConfig, type AgentRunnerHooks } from './AgentRunner.js';
+
+const logger = createLogger('ConfigurableAgentTool');
```
🧹 Nitpick comments (8)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2)
11-14
: Fix import order to satisfy import/order. Place the LLMResponseParser import before LLMTracingWrapper and group consistently.
```diff
-import { callLLMWithTracing } from './LLMTracingWrapper.js';
-import type { Tool, LLMContext } from './Tools.js';
-import { LLMResponseParser } from '../LLM/LLMResponseParser.js';
+import { LLMResponseParser } from '../LLM/LLMResponseParser.js';
+import { callLLMWithTracing } from './LLMTracingWrapper.js';
+import type { Tool, LLMContext } from './Tools.js';
```
272-285
: Type safety + lint: replace any, remove the trailing space, validate shape. Use unknown and narrow; this also clears @typescript-eslint/no-explicit-any and the trailing-space warning.
```diff
-let parsed: any;
+let parsed: unknown;
 try {
   parsed = LLMResponseParser.parseStrictJSON(text);
 } catch {
   parsed = LLMResponseParser.parseJSONWithFallbacks(text);
 }
-
-if (parsed && typeof parsed === 'object') {
+if (parsed && (typeof parsed === 'object' || Array.isArray(parsed))) {
   logger.debug(`JSON extraction successful on attempt ${attempt}`);
   return parsed;
 }
 throw new Error('Parsed extraction result is not an object/array');
```
Optional: validate against the provided schema (Ajv) before returning to catch structural drift.
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (2)
296-313
: Model sentinel resolution: handle a function returning a sentinel. If targetConfig.modelName() returns USE_MINI/USE_NANO, reuse the existing branches to enforce mini/nano presence.
```diff
-resolvedModelName = targetConfig.modelName();
+const fnModel = targetConfig.modelName();
+if (fnModel === MODEL_SENTINELS.USE_MINI) {
+  if (!miniModel) throw new Error(`Mini model not provided for handoff to agent '${targetAgentName}'.`);
+  resolvedModelName = miniModel;
+} else if (fnModel === MODEL_SENTINELS.USE_NANO) {
+  if (!nanoModel) throw new Error(`Nano model not provided for handoff to agent '${targetAgentName}'.`);
+  resolvedModelName = nanoModel;
+} else {
+  resolvedModelName = fnModel;
+}
```
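The same resolution logic, extracted as a standalone function for clarity (sentinel strings below are illustrative stand-ins for MODEL_SENTINELS.USE_MINI/USE_NANO):

```typescript
// Resolve a requested model name: sentinels map to the mini/nano variants
// (failing loudly when the variant is missing); concrete names pass through.
const USE_MINI = '__USE_MINI__';
const USE_NANO = '__USE_NANO__';

function resolveModelName(requested: string, miniModel?: string, nanoModel?: string): string {
  if (requested === USE_MINI) {
    if (!miniModel) {
      throw new Error('Mini model not provided for handoff.');
    }
    return miniModel;
  }
  if (requested === USE_NANO) {
    if (!nanoModel) {
      throw new Error('Nano model not provided for handoff.');
    }
    return nanoModel;
  }
  return requested; // concrete model name passes through unchanged
}

console.log(resolveModelName(USE_MINI, 'gpt-4o-mini'));  // gpt-4o-mini
console.log(resolveModelName('claude-sonnet'));          // claude-sonnet
```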
976-989
: Replace any with LLMContext; fix ESLint no-explicit-any. Cast the execution context to LLMContext to keep types consistent with tools.
```diff
-toolResultData = await toolToExecute.execute(toolArgs as any, ({
-  provider: config.provider,
-  model: modelName,
-  miniModel: config.mminiModel,
-  nanoModel: config.nanoModel,
-  getVisionCapability: config.getVisionCapability,
-  overrideSessionId: preallocatedChildId,
-  overrideParentSessionId: currentSession.sessionId,
-  overrideTraceId: execTracingContext?.traceId,
-} as any));
+toolResultData = await toolToExecute.execute(
+  toolArgs as unknown,
+  {
+    provider: config.provider,
+    model: modelName,
+    miniModel: config.miniModel,
+    nanoModel: config.nanoModel,
+    getVisionCapability: config.getVisionCapability,
+    overrideSessionId: preallocatedChildId,
+    overrideParentSessionId: currentSession.sessionId,
+    overrideTraceId: execTracingContext?.traceId,
+  } as import('../tools/Tools.js').LLMContext
+);
```
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (4)
10-12
: Resolve import/order groupings. Order type/value imports consistently (Constants and LLM types before local type-only imports). Running eslint --fix should auto-sort.
18-29
: Interface delimiter style + export: use semicolons. Fix ESLint member-delimiter-style and keep the exported shape.
```diff
 export interface CallCtx {
-  provider?: LLMProvider,
-  model?: string,
-  miniModel?: string,
-  nanoModel?: string,
-  mainModel?: string,
-  getVisionCapability?: (modelName: string) => Promise<boolean> | boolean,
-  overrideSessionId?: string,
-  overrideParentSessionId?: string,
-  overrideTraceId?: string,
+  provider?: LLMProvider;
+  model?: string;
+  miniModel?: string;
+  nanoModel?: string;
+  mainModel?: string;
+  getVisionCapability?: (modelName: string) => Promise<boolean> | boolean;
+  overrideSessionId?: string;
+  overrideParentSessionId?: string;
+  overrideTraceId?: string;
 }
```
431-467
: Context-driven model/provider resolution: solid; remove trailing spaces. Logic for USE_MINI/USE_NANO and the provider checks looks good. Clean trailing spaces to satisfy lint.
498-499
: Avoid any: keep callCtx typed when passing overrides. Use CallCtx instead of any.
```diff
-const ctx: any = callCtx || {};
+const ctx: CallCtx = callCtx || {};
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts
(8 hunks)front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
(3 hunks)front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts
(6 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts (4)
front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(920-1013)logger
(1304-1374)logger
(1397-1424)logger
(1531-1544)logger
(1588-1591)front_end/panels/ai_chat/LLM/LLMTypes.ts (1)
LLMProvider
(142-142)front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_SENTINELS
(85-88)front_end/panels/ai_chat/agent_framework/AgentRunner.ts (1)
AgentRunnerConfig
(26-41)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (2)
front_end/panels/ai_chat/ui/AIChatPanel.ts (8)
provider
(1431-1494)model
(1751-1764)text
(1850-1857)logger
(920-1013)logger
(1304-1374)logger
(1397-1424)logger
(1531-1544)logger
(1588-1591)front_end/panels/ai_chat/LLM/LLMResponseParser.ts (1)
LLMResponseParser
(13-306)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (3)
front_end/panels/ai_chat/core/Constants.ts (1)
MODEL_SENTINELS
(85-88)front_end/panels/ai_chat/ui/AIChatPanel.ts (5)
logger
(920-1013)logger
(1304-1374)logger
(1397-1424)logger
(1531-1544)logger
(1588-1591)front_end/panels/ai_chat/tracing/TracingConfig.ts (1)
getCurrentTracingContext
(299-301)
🪛 ESLint
front_end/panels/ai_chat/agent_framework/ConfigurableAgentTool.ts
[error] 10-10: There should be at least one empty line between import groups
(import/order)
[error] 10-10: ../core/Constants.js
import should occur before type import of ../tools/Tools.js
(import/order)
[error] 11-11: There should be at least one empty line between import groups
(import/order)
[error] 12-12: ../LLM/LLMTypes.js
type import should occur before type import of ../tools/Tools.js
(import/order)
[error] 16-16: Import in body of module; reorder to top.
(import/first)
[error] 16-16: ./AgentRunner.js
import should occur before type import of ./AgentSessionTypes.js
(import/order)
[error] 20-20: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 21-21: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 22-22: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 23-23: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 24-24: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 25-25: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 26-26: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 27-27: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 28-28: Expected a semicolon.
(@stylistic/member-delimiter-style)
[error] 429-429: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 432-432: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 457-457: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 462-462: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 467-467: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 471-471: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
[error] 498-498: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts
[error] 13-13: ../LLM/LLMResponseParser.js
import should occur before import of ./LLMTracingWrapper.js
(import/order)
[error] 274-274: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 280-280: Trailing spaces not allowed.
(@stylistic/no-trailing-spaces)
front_end/panels/ai_chat/agent_framework/AgentRunner.ts
[error] 21-21: Import in body of module; reorder to top.
(import/first)
[error] 21-21: ../core/Constants.js
import should occur before import of ../core/PageInfoManager.js
(import/order)
[error] 979-979: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: claude-review
🔇 Additional comments (3)
front_end/panels/ai_chat/tools/StreamlinedSchemaExtractorTool.ts (1)
182-189
: Guarded merge of retry results looks good. Only re-resolves on object results and aborts on repeated failures.
front_end/panels/ai_chat/agent_framework/AgentRunner.ts (2)
315-328
: Propagation of miniModel/nanoModel into target runner config looks correct.
1242-1245
: Good: pass mini/nano through max-iterations handoff.
#45)
New Features
- Per-task model-size options (mini/nano) across agents, tools, and evaluations.
- New LLM context support for tool executions and a new ObjectiveDrivenAction tool.
- Automatic fetch of OpenRouter models after OAuth with UI update and event dispatch.
- OpenRouter model caching with TTL, auto-refresh, and a programmatic update hook.
Changes
- Many agent/tool flows now prefer or require mini/nano models; evaluation/UI accept explicit main/mini/nano.
- Tool execute signatures extended to accept LLM context; reduced verbose tool logging.
- Centralized model/provider sentinels and placeholders; stricter JSON parsing for extractors.
Tests
- Added unit tests covering OpenRouter model cache and refresh behavior.
Fix the model config issue, OpenRouter Oauth issue and model refresh issue